Abstract

This paper presents a passive autofocus algorithm applicable to interferometric microscopes. The proposed algorithm uses the number of slope variations in an image mask to locate the focal plane (based on focus-inflection points) and to identify the two neighboring planes at which fringes respectively appear and disappear. In experiments involving a Mirau objective lens, the proposed algorithm matched the autofocusing performance of conventional algorithms and significantly outperformed detection schemes based on the zero-order interference fringe in dealing with surface blemishes of all kinds, regardless of severity. The proposed algorithm also proved highly effective in cases without fringes.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Optical microscopy measurement systems are widely used in characterizing semiconductors [1], nanostructures [2], fluorescence devices [3], and biological entities [4,5]. Autofocusing functions are classified as active [6–12] or passive [13–23]. Active autofocus systems direct a signal (e.g., laser or ultrasound) toward the sample and then detect the reflected signals. By comparing the reflected signal with a given reference point, it is possible to derive the distance between the light source and the sample (focal length) for use as a reference in adjusting the lens group. Passive autofocus systems use a camera to capture a large number of sample images at various focal lengths, whereupon an algorithm identifies those that capture the sharpest representation of the sample. Passive autofocus systems are generally more robust, cost-effective, and accurate than active systems; however, their response time is usually much slower.

Passive autofocus systems capture a stack of images, some of which lie within the range of focus and some of which lie outside it. An algorithm is used to automate the identification of one or more images that lie within that range of focus based on the maximum intensity, the accumulated sum of intensity values, or other metrics. The algorithm most commonly used for interferometric microscopes is based on the zero-order interference fringe [24] (i.e., the fringe formed by the superposition of multiple wavelengths), which uses the fringe with the maximum intensity to identify the focal point. Table 2 lists five common autofocus algorithms used to control the passive autofocus operation of normal microscopes: square gradient, image power, Brenner gradient, energy Laplace, and maximum pixel intensity [25]. In these five algorithms, the various operators can be used to calculate the intensity value of each image within a specified area (mask) and then derive the focal point corresponding to the peak position (highest accumulated sum). In recent years, researchers have developed other autofocusing algorithms that are more robust and/or accurate than those listed in Table 2. The use of phase congruency to find the focal plane makes the scheme in [19] highly robust to sensor noise under a range of illumination conditions, while providing a good balance between defocus sensitivity and effective range. The scheme in [20] maximizes the image score using six different image-scoring algorithms to deal with a wide range of excitation wavelengths. Its automated multi-axis alignment procedure also enhances the versatility of the system. Using as few as two intermediate images, the scheme in [21] is able to find the focal plane in phase-contrast (bright-field) microscopy or fluorescence microscopy images from pathology slides.
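To make the focus metrics above concrete, one of the five listed algorithms, the Brenner gradient, can be sketched in a few lines. This is a Python/NumPy sketch under our own naming (`brenner_gradient`, `find_focal_index`), not the paper's implementation.

```python
import numpy as np

def brenner_gradient(image: np.ndarray) -> float:
    """Brenner gradient focus score: sum of squared differences between
    pixels two columns apart. Sharper images score higher."""
    diff = image[:, 2:].astype(float) - image[:, :-2].astype(float)
    return float(np.sum(diff ** 2))

def find_focal_index(stack) -> int:
    """Return the index of the sharpest image in a stack of 2-D arrays."""
    return int(np.argmax([brenner_gradient(img) for img in stack]))
```

A defocused image is smoother, so its neighboring-pixel differences shrink and its score drops, which is why the peak of the score along the stack marks the focal plane.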

In the current study, we developed a passive autofocus algorithm based on the number of slope variations associated with the combined effect of the diffusivity of light reflected from the sample surface and the ideal interferometric fringes. Diffuse reflection from smooth surfaces (scattering evenly in many directions) results in surface reflections with distinct variations in intensity. Ideal interferometric fringes from perfectly smooth surfaces contain only one fringe of maximum intensity (the zero-order fringe) together with several other fringes of lower intensity. Although the intensity variations on the smooth surface plane are high, the fringe of maximum intensity (i.e., the zero-order fringe) can effectively suppress those variations on the focal plane. By contrast, the weaker fringes (i.e., at the appearance and disappearance of fringes) cannot effectively suppress the intensity variations on the smooth surface plane. Thus, the intensity variations on the smooth surface plane differ among the ideal appearance fringe, the ideal maximum fringe (i.e., the zero-order fringe), and the ideal disappearance fringe. The proposed algorithm uses the number of slope variations in an image mask to identify focus-inflection points in order to locate the image that corresponds to the focal plane. Implementing the proposed algorithm once produces one known focal point. Implementing the algorithm multiple times produces multiple focal points, which can be used to compile 2D and 3D profiles for use in rebuilding regions affected by dirt or imperfections (hereafter referred to as blemishes) on the sample surface. Note that this is not possible using the zero-order interference fringe. In this study, blemishes are differentiated as uniform and non-uniform.
The even scattering of reflected light by uniform surface blemishes produces largely intact fringes with small variations in intensity (shown in Fig. 9(b)), such that the fringe with the highest intensity is located on the focal plane. The uneven scattering of reflected light by non-uniform surface blemishes produces non-intact fringes with large variations in intensity (shown in Fig. 13(b)), such that the fringe with the highest intensity is located on the defocal plane. The number of slope variations in an image mask can also be used to identify the two neighboring defocal planes in order to reduce the scope of the area that must be re-scanned (i.e., the distance between the first and last images) to enhance efficiency by reducing computational overhead. The advantage of the proposed algorithm is that focal points correspond to the lowest number of slope variations, and blemishes correspond to higher numbers of slope variations. Thus, the trend associated with the focal point differs from that of the blemish. However, in terms of the intensity of the zero-order interference fringe, the trends associated with the focal plane and the blemish are the same, which means algorithms based on the zero-order interference fringe cannot be used to differentiate smooth surface areas from blemishes.

In Section 2, we outline the proposed autofocus algorithm, in which the number of slope variations is used to identify inflection points associated with focal distance. In Section 3, we use simulations to show that the ideal theoretical fringe influences the variations in the intensity of light reflected from the sample surface. We also demonstrate that the number of slope variations differs among the ideal appearance fringe, the maximum fringe on the focal plane, and the disappearance fringe. We assess the robustness of the proposed algorithm by comparing the number of fringe variations found in the simulations with those measured in the experiments. Section 4 outlines experiments used to assess the feasibility of using inflection points for focusing and compares the proposed scheme with existing systems. Conclusions are drawn in Section 5.

2. Implementation of proposed autofocus algorithm

Figure 1 outlines the experimental setup of an interferometric microscope, in which a Mirau objective is installed on a normal microscope to produce interference fringes. The light path is indicated by the following labels: Lens_1, Beam splitter_1, Lens_2, pixel C (on the step-height sample), and Lens_3. The proposed autofocus algorithm uses the number of slope variations in the mask to identify the focal plane at pixel $C({{i_c},{j_c}} )$. The step-height sample is held on a piezo scanning stage (PZT), which moves the sample a set distance R μm away from the Mirau objective lens in order to produce interference fringes on the focal plane. As shown in Fig. 1, when the optical path between Beam splitter_2 and the reference mirror is equal to the optical path between Beam splitter_2 and pixel C on the sample, fringes are produced by the Mirau objective. The number of slope variations in the mask is indicated by ${N_s}(z)$, where z refers to a stack of M images labeled ${0^{\textrm{th}}},\textrm{ }{1^{\textrm{st}}},\textrm{ } \ldots ,{(\textrm{M} - 1)^{th}}$. As shown in Fig. 1 and Fig. 2(a), an image is first captured with the sample located close to the Mirau objective. This image is designated the 0th image, which is $(0 \times R)$ μm away from the lens along the z-axis. The scanning stage then moves the sample a fixed distance R μm from the lens and captures the 1st image in the stack, corresponding to the position located $R( = 1 \times R)$ μm from the lens. The 2nd image corresponds to the position located $2R( = 2 \times R)$ μm from the lens, and so on. After M cycles, the resulting M images are labeled 0th, 1st, 2nd, 3rd, …, ($M - 1$)th. As shown in Fig. 2(a), the x and y coordinates are measured in pixels (within a single image), and the z coordinate is measured by the number of images in the stack. As shown in the masks of ${M_{ - z}}$, ${M_z}$, and ${M_{ + z}}$ in Fig. 2(b), an ideal theoretical interferometric fringe from a white light source would be smooth and vary only at the fringe peaks. In that image, the fringe with the maximum intensity (i.e., the zero-order fringe) lies within the green area and is shown in the corresponding ${M_z}$ image. One weak-intensity fringe lies within the red area on the left and is shown in the corresponding ${M_{ - z}}$ image, whereas the other weak-intensity fringe, lying within the red area on the right, is shown in the corresponding ${M_{ + z}}$ image. Figure 2(c) illustrates the large number of high-amplitude variations within a cross-section passing through pixel C (i.e., $({{i_c},{j_c}} )$) obtained from a step-height sample with a smooth surface. The high-amplitude variations in the cross-section observed in Fig. 2(c) were produced by diffuse reflections from the sample surface. The three areas marked in Fig. 2(c) indicate where images ${M_{ - z}}$, ${M_z}$, and ${M_{ + z}}$ were captured. The variations in the fringe presented in Fig. 2(d) are produced by the weaker fringe intensity in the corresponding ${M_{ - z}}$ in Fig. 2(b) and the variations in the stronger intensity of light reflected from the sample surface (produced by diffuse reflection) in the corresponding ${M_{ - z}}$ in Fig. 2(c). Thus, Fig. 2(d) presents the captured ${M_{ - z}}$ image, where the cross-section of the fringe exhibits variations. Similarly, the cross-section of the fringe in the ${M_{ + z}}$ image (Fig. 2(f)) exhibits variations due to the weaker fringe intensity and the variations in the intensity of light reflected from the sample surface. By contrast, the maximum intensity of the fringe in the corresponding ${M_z}$ in Fig. 2(b) effectively reduces variations in the intensity of the light reflected from the sample surface in the corresponding ${M_z}$ in Fig. 2(c).
Thus, the cross-section of the fringe in ${M_z}$ on the focal plane (i.e., Fig. 2(e)) is smoother than those in Fig. 2(d) and Fig. 2(f). Interferometric fringes produced by the white light source in Fig. 1 have a short coherence length and are sensitive to variations in the intensity of light reflected from the sample surface. The maximum intensity of the fringe can reduce these variations; accordingly, ${N_s}(z)$ reaches its lowest value at the maximum fringe intensity, as shown in Fig. 2(e). The weaker the intensity of the fringe, the stronger the variations in the intensity of the light reflected from the sample surface. Therefore, ${N_s}({M_{ - z}})$ in Fig. 2(d) and ${N_s}({M_{ + z}})$ in Fig. 2(f) are always higher than ${N_s}({M_z})$ in Fig. 2(e).


Fig. 1. Experimental setup of the interferometric microscope used in this study.



Fig. 2. (a) Stack of M images labeled z = 0th, 1st, …, $({M - 1} )$th. At pixel C, plane ${M_{ - z}}$ indicates the appearance of fringes corresponding to a higher ${N_s}({{M_{ - z}}} )$; focal plane ${M_z}$ indicates the appearance of smooth fringes on the focal plane corresponding to the lowest ${N_s}({{M_z}} )$; plane ${M_{ + z}}$ indicates the disappearance of the fringe corresponding to a higher ${N_s}({{M_{ + z}}} )$; (b) Ideal smooth fringe produced from a white light source; (c) Jagged variations in the intensity of light from the sample surface (produced by diffuse reflection); (d) Appearance of the fringe produced from the corresponding ${M_{ - z}}$ images in Fig. 2(b) and Fig. 2(c); (e) Maximum intensity fringe (i.e., zero-order fringe) produced from the corresponding ${M_z}$ images in Fig. 2(b) and Fig. 2(c); (f) Disappearance of the fringe produced from the corresponding ${M_{ + z}}$ images in Fig. 2(b) and Fig. 2(c).


Step 1: Determining the number of slope variations in the mask, ${N_s}(z )$

As shown in Fig. 2(d)-(f), pixel C $({{i_c},{j_c}} )$ was adopted as the center pixel in an ${N_x} \times {N_y}$ mask (row ${\times}$ column), where all of the pixels within the mask are described as follows:

$$\left\{ {\begin{array}{{c}} {{i_1} = {i_c} - \frac{{{N_x} - 1}}{2},{i_2} = {i_c} + \frac{{{N_x} - 1}}{2},}\\ {{j_1} = {j_c} - \frac{{{N_y} - 1}}{2},{j_2} = {j_c} + \frac{{{N_y} - 1}}{2}.} \end{array}} \right.$$
In this subsection, a $9 \times 1$ mask is used to illustrate the proposed algorithm. As shown in Fig. 2(d)-(f), pixel C $({{i_c},{j_c}} )= ({8,{j_c}} )$ and the $9 \times 1$ mask comprises $({{i_1},{j_1}} )= ({4,{j_c}} )$ to $({{i_2},{j_2}} )= ({12,{j_c}} )$. The number of slope variations associated with interference fringes within the mask is calculated as follows:
$${V_s}({i,j,z} )= \frac{{[{I({i,j,z} )- I({i - 1,j,z} )} ]}}{{abs\{{[{I({i,j,z} )- I({i - 1,j,z} )} ]} \}}}$$
where $({i,j} )$ indicates the position of the pixel of interest and $i = {i_1}\sim {i_2}$, $j = {j_1}\sim {j_2}$; z refers to the labeled images; I indicates the image intensity value; and ${V_s}$ indicates the slope sign between two neighboring pixels in image z.

The following equation is used to determine the position of slope variations in the mask:

$$\left\{ {\begin{array}{{c}} {if\;{V_s}({i,j,z} )\times {V_s}({i - 1,j,z} )={-} 1,\; {P_{ - 1}}({i,j,z} )={-} 1}\\ {if\;{V_s}({i,j,z} )\times {V_s}({i - 1,j,z} )={+} 1,\; {P_{ - 1}}({i,j,z} )= 0} \end{array}} \right.$$
In the following equation, ${N_s}(z)$ indicates the number of slope variations associated with center pixel C $({{i_c},{j_c}} )$ in image z:
$${N_s}(z )= \mathop \sum \limits_{i = {i_1}}^{{i_2}} \mathop \sum \limits_{j = {j_1}}^{{j_2}} abs[{{P_{ - 1}}({i,\; j,\; z} )} ].$$
In Fig. 2(d)-(f), the results from Eq. (2) are indicated by + and $-$ symbols; the results from Eq. (3) are indicated by purple arrows; and the results from Eq. (4) are indicated by purple numerals 1, 2, …, 8. Here, ${N_s}(z) = {N_s}({M_{ - z}}) = 8$ and ${N_s}(z) = {N_s}({M_{ + z}}) = 8$, whereas ${N_s}(z) = {N_s}({M_z}) = 2$. Figure 3(a) presents a diagram of ${N_s}(z)$, where z = 0∼M-1. Note that the two highest values, ${N_s}({M_{ - z}}) = 8$ and ${N_s}({M_{ + z}}) = 8$, are indicated in red, and the lowest value, ${N_s}({M_z}) = 2$, is marked in green.
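The counting in Step 1 (Eqs. (2)-(4)) can be sketched for a one-dimensional mask as follows. This is a minimal Python/NumPy sketch under our own naming (`slope_variation_count`); flat steps, where the quotient in Eq. (2) would divide by zero, are simply skipped, which is our own assumption.

```python
import numpy as np

def slope_variation_count(profile: np.ndarray) -> int:
    """Number of slope-sign changes N_s within a 1-D mask through pixel C.

    profile: intensity values I along the mask in one image z.
    """
    d = np.diff(profile.astype(float))        # I(i) - I(i-1)
    v = np.sign(d)                            # slope sign V_s, as in Eq. (2)
    v = v[v != 0]                             # drop flat steps (sign undefined)
    # A product of -1 between neighboring signs marks a slope variation
    # (Eq. (3)); summing the marks gives N_s (Eq. (4)).
    return int(np.sum(v[1:] * v[:-1] == -1))
```

For the jagged cross-sections of Fig. 2(d) and 2(f) this count is high, while the smooth zero-order fringe of Fig. 2(e) yields the lowest count, which is the quantity the algorithm minimizes.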


Fig. 3. Implementation of proposed autofocus algorithm in three steps: (a) Step 1: Determine the total number of slope variations, ${N_s}(z )$, in the stack of images, where z = 0∼$M - 1$; (b) Step 2: Derive ${f_1}(z )$ and ${f_2}(z )$ from ${N_s}(z )$ in the diagram; (c) Step 3: Identify the focus-inflection points, ${M_i}^{\prime}$, from curve ${f_1}(z )$, and then identify the focal point, ${M_z}$, using Eq. (11) for interferometric microscopes.


Step 2: Derive the values for ${f_1}(z )$ and ${f_2}(z )$ from ${N_s}(z )$ in the diagram using the discrete Fourier and inverse discrete Fourier transforms.

The discrete Fourier transform is written as follows:

$$F(u )= \mathop \sum \nolimits_{z = 0}^{M - 1} {N_s}(z )\times {e^{ - j2\pi \left( {\frac{{uz}}{M}} \right)}}$$
where u, z = 0, 1, 2, …, $M - 1$; z is a labeled image; and u indicates the frequency-domain index corresponding to z.
$$\left\{ {\begin{array}{{c}} {{K_1}(u )= 1,if\frac{{M - 1}}{2} - {C_b} \le u \le \frac{{M - 1}}{2} + {C_b}}\\ {{K_1}(u )= 0,others} \end{array}} \right.$$
$$\left\{ {\begin{array}{{c}} {{K_2}(u )= 0,if\frac{{M - 1}}{2} - {C_b} \le u \le \frac{{M - 1}}{2} + {C_b}}\\ {{K_2}(u )= 1,others} \end{array}} \right.$$
In Eqs. (6) and (7), the cutoff ${C_b}$ is used to differentiate between the actual surface and surface blemishes when applied to an interferometric microscope. If pixel C is located above an unblemished region, then the ${N_s}(z)$ values of pixels within the mask present a normal distribution (see Fig. 7(a)). If pixel C is located above a surface blemish, then the ${N_s}(z)$ values of pixels within the mask present an abnormal distribution (see Fig. 8(b)). Note that ${C_b}$ must be less than $\frac{{M - 1}}{2}$ in order to differentiate between the actual surface and blemishes. In the current study, this value was set at 15 pixels in the frequency domain (unit: $\frac{1}{{R\; \mu m \times pixels}}$).

Inverse Discrete Fourier Transform:

Substituting Eqs. (5) and (6) into Eq. (8) gives us curve ${f_1}(z )$. Similarly, substituting Eqs. (5) and (7) into Eq. (9) gives us curve ${f_2}(z )$.

$${f_1}(z )= \frac{1}{M}\mathop \sum \nolimits_{u = 0}^{M - 1} abs\left( {F(u )\times {K_1}(u )\times {e^{j2\pi \left( {\frac{{uz}}{M}} \right)}}} \right)$$
$${f_2}(z )= \frac{1}{M}\mathop \sum \nolimits_{u = 0}^{M - 1} abs\left( {F(u )\times {K_2}(u )\times {e^{j2\pi \left( {\frac{{uz}}{M}} \right)}}} \right)$$
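Equations (5)-(9) amount to splitting the spectrum of ${N_s}(z)$ into the band kept by $K_1$ and its complement $K_2$. The following is a hedged Python/NumPy sketch with the band centered at $u = (M-1)/2$ exactly as Eqs. (6)-(7) state; we also assume the magnitude in Eqs. (8)-(9) is taken of the full inverse transform. The name `split_bands` is our own.

```python
import numpy as np

def split_bands(ns: np.ndarray, c_b: int):
    """Return (f1, f2) from N_s(z) following Eqs. (5)-(9).

    K1 keeps frequency indices within c_b of (M-1)/2; K2 is its complement.
    """
    m = len(ns)
    F = np.fft.fft(ns)                              # Eq. (5)
    u = np.arange(m)
    mid = (m - 1) / 2.0
    k1 = (np.abs(u - mid) <= c_b).astype(float)     # Eq. (6)
    k2 = 1.0 - k1                                   # Eq. (7)
    f1 = np.abs(np.fft.ifft(F * k1))                # Eq. (8), abs of the result
    f2 = np.abs(np.fft.ifft(F * k2))                # Eq. (9)
    return f1, f2
```

With $C_b \ge (M-1)/2$ the filter $K_1$ passes everything, so $f_1$ reproduces $|N_s(z)|$ and $f_2$ vanishes; shrinking $C_b$ progressively separates the smooth trend from the residual used to flag blemishes.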
Step 3: Determining focus-inflection points from curve ${f_1}(z )$, and then determining the focal point ${M_z}$

In Eq. (10), $M_i^{\prime}$ indicates the focus-inflection points, $z^{\prime}$ indicates the labeled images (from 1st to $({M - 2} )$th), and three neighboring images ($z^{\prime} - 1$, $z^{\prime}$, and $z^{\prime} + 1$) are used to determine whether the quotient in Eq. (10) equals -1. If the calculated result is -1, then $z^{\prime}$ is regarded as ${M_i}^{\prime}$. Equation (10) yields seven values for ${M_i}^{\prime}$ (${M_1}^{\prime},{M_2}^{\prime},\ldots ,{M_7}^{\prime}$), as indicated by the hollow circles in Fig. 3(c):

$${M_i}^{\prime} = z^{\prime},\; if\frac{{[{{f_1}({z^{\prime}} )- {f_1}({\textrm{z}^{\prime} - 1} )} ]\times [{{f_1}({\textrm{z}^{\prime} + 1} )- {f_1}({\textrm{z}^{\prime}} )} ]}}{{abs\{{[{{f_1}({\textrm{z}^{\prime}} )- {f_1}({\textrm{z}^{\prime} - 1} )} ]\times [{{f_1}({\textrm{z}^{\prime} + 1} )- {f_1}({\textrm{z}^{\prime}} )} ]} \}}} ={-} 1$$
where $z^{\prime} = 1, \ldots, M - 2$.

Pixel C on focal plane ${M_4}^{\prime}({i.e.\; {M_z}} )$ can be identified from among the ${M_i}^{\prime}$ focus-inflection points using Eq. (11) for interferometric microscopes.

$${\textbf {arg}}\; {\textbf{min}}\; Q({M_i^{\prime}} )= \frac{{{f_1}({M_i^{\prime}} )}}{{\frac{1}{{{V_b}}}\mathop \sum \nolimits_{z = {M_i}^{\prime} - \frac{{{V_b} - 1}}{2}}^{{M_i}^{\prime} + \frac{{{V_b} - 1}}{2}} abs[{{f_2}(z )} ]}}$$
where ${V_b}$ refers to the mask associated with each ${M_i}^{\prime}$ (where ${M_i}^{\prime} = {M_1}^{\prime},{M_2}^{\prime},\ldots ,{M_7}^{\prime}$) (see Fig. 3(c)). Parameter ${V_b}$ is used in conjunction with the curve of ${f_2}(z )$ to differentiate between the surface (Fig. 7(a)) and blemishes on the surface (Fig. 8(b)). In the current study, ${V_b}$, the number of images for the interferometric microscope, was set at 21. Equation (11) includes three processes, represented as the following functions: $Q({M_i^{\prime}} )$, ${\textbf{min}}\; Q({M_i^{\prime}} )$, and ${\textbf {arg}}\; {\textbf{min}}\; Q({M_i^{\prime}} )$. Using ${f_1}({{M_i}^{\prime}} )$ (in the numerator) and $\frac{1}{{{V_b}}}\mathop \sum \nolimits_{z = {M_i}^{\prime} - \frac{{{V_b} - 1}}{2}}^{{M_i}^{\prime} + \frac{{{V_b} - 1}}{2}} abs[{{f_2}(z )} ]$ (in the denominator), we derive seven values of $Q({M_i^{\prime}} )$ (i.e., $Q({{M_1}^{\prime}} ),\; Q({{M_2}^{\prime}} ), \ldots ,Q({{M_7}^{\prime}} )$). We then use function ${\textbf{min}}\; Q({{M_i}^{\prime}} )$ to derive $Q({{M_4}^{\prime}} )$ from the above-mentioned seven values. Finally, we use function ${\textbf {arg}}\; {\textbf{min}}\; Q({{M_i}^{\prime}} )$ to derive ${M_4}^{\prime}({ = {M_z}} )$ from $Q({{M_4}^{\prime}} )$, which is indicated by the solid circle in Fig. 3(c). Note that Fig. 3(c) presents the distributions of ${f_1}$ and ${f_2}$ obtained from a smooth surface using an interferometric microscope. The curves of ${f_1}(z )$ and ${f_2}(z )$ in Fig. 3(c) reveal high ${N_s}(z )$ values at ${M_3}^{\prime}$ and ${M_5}^{\prime}$, and low ${N_s}(z )$ values at ${M_4}^{\prime}$. In Eq. (11), the ${f_1}({{M_i}^{\prime}} )$ operator is used to derive the focal point, whereupon ${f_2}(z )$ is used to double-check the autofocusing result of ${f_1}({{M_i}^{\prime}} )$ via the operator $\frac{1}{{{V_b}}}\mathop \sum \limits_{z = {M_i}^{\prime} - \frac{{{V_b} - 1}}{2}}^{{M_i}^{\prime} + \frac{{{V_b} - 1}}{2}} abs[{{f_2}(z )} ]$. To summarize, the focal point of C $({{i_c},{j_c}} )$ is on image ${M_4}^{\prime}$ (indicated by the solid circle in Fig. 3(c)), which contains the fringe with the highest intensity. The fringe appears in the neighboring ${M_3}^{\prime}$ image and disappears in the neighboring ${M_5}^{\prime}$ image.
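Step 3 (Eqs. (10)-(11)) can be sketched as follows. This is a Python/NumPy sketch; `inflection_points` and `focal_point` are hypothetical names, and the clipping of the ${V_b}$ window at the stack boundaries is our own assumption.

```python
import numpy as np

def inflection_points(f1: np.ndarray):
    """Indices M_i' where the slope of f1 changes sign (Eq. (10))."""
    d = np.diff(f1)
    return [z for z in range(1, len(f1) - 1) if d[z - 1] * d[z] < 0]

def focal_point(f1: np.ndarray, f2: np.ndarray, v_b: int):
    """Pick the M_i' minimising Q (Eq. (11)): low f1 at the candidate,
    normalised by the local mean of |f2| over a window of v_b images."""
    half = (v_b - 1) // 2
    best, best_q = None, np.inf
    for m in inflection_points(f1):
        lo, hi = max(0, m - half), min(len(f2), m + half + 1)
        denom = np.mean(np.abs(f2[lo:hi]))
        q = f1[m] / denom if denom > 0 else np.inf
        if q < best_q:
            best, best_q = m, q
    return best
```

On a smooth surface the candidate with the lowest $f_1$ value and an ordinary $f_2$ neighborhood wins, matching the solid-circle selection of ${M_4}^{\prime}$ in Fig. 3(c).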

3. Simulation results for slope variations in images from interferometric microscope

The experiment described below was conducted using a microscope (Olympus BX51M) with a 100 W halogen lamp light source, a 10X normal objective lens (RMS10X PLAN ACHROMAT OBJECTIVE, 0.25 NA, WD 10.6 mm), a 10X Mirau objective lens (Mirau interferometer, Nikon CF Plan, JAPAN), a CCD camera (DCU223C, 1024 × 768 resolution, color, pixel size 4.65 μm × 4.65 μm), and a piezo scanning stage (LPS710/M), as shown in Fig. 1. The performance of the proposed algorithm was evaluated by performing a series of MATLAB simulations on a notebook computer equipped with an Intel Core i5-6200U, 2.31 GHz, and 4 GB of RAM.

Figure 4(a)-(c) present simulated fringes, in which the four solid circles are pixels in rows 150, 450, 722, and 986. Each solid circle indicates the center of a 201×1 mask, indicated by the green area. In the green masks in Fig. 4, ${M_p}$ indicates the varied intensity without fringes on the surface of the step-height sample; ${M_{ - z}}$ indicates the appearance of fringes; ${M_z}$ indicates the maximum intensity fringe (i.e., the zero-order fringe); and ${M_{ + z}}$ indicates the disappearance of fringes. Figure 4(a) corresponds to Fig. 2(c); Fig. 4(b) corresponds to Fig. 2(b); and Fig. 4(c), the result of Fig. 4(b) multiplied by Fig. 4(a), corresponds to Fig. 2(d)-(f). Figures 5(a)-(d) present the experimental results obtained using the setup in Fig. 1. Figure 5(a) presents the captured ${M_p}$ image without fringes and the cross-section passing through pixel C (515, 1265) with the 201×1 mask, corresponding to ${N_s}({{M_p}} )$ in Fig. 4(c). Figure 5(b) is the captured ${M_{ - z}}$ image showing the appearance of fringes and the cross-section passing through pixel C (515, 1265) with the 201×1 mask, corresponding to ${N_s}({{M_{ - z}}} )$ in Fig. 4(c). Figure 5(c) is the captured ${M_z}$ image with the maximum intensity fringe and the cross-section passing through pixel C (515, 1265) with the 201×1 mask, corresponding to ${N_s}({{M_z}} )$ in Fig. 4(c). Figure 5(d) is the captured ${M_{ + z}}$ image showing the disappearance of fringes and the cross-section passing through pixel C (515, 1265) with the 201×1 mask, corresponding to ${N_s}({{M_{ + z}}} )$ in Fig. 4(c).


Fig. 4. (a) Simulated variations in intensity of the light reflected from the sample surface in Fig. 2(c) with ${N_s}({{M_p}} )= 124$, ${N_s}({{M_{ - z}}} )= 137$, ${N_s}({{M_z}} )= 131$, and ${N_s}({{M_{ + z}}} )= 143$; (b) simulated ideal interferometric fringe of white light source in Fig. 2(b) with ${N_s}({{M_p}} )= 0$, ${N_s}({{M_{ - z}}} )= 4$, ${N_s}({{M_z}} )= 4$, and ${N_s}({{M_{ + z}}} )= 4$; (c) simulated fringe obtained from Fig. 4(a) multiplied by Fig. 4(b) with ${N_s}({{M_p}} )= 124$, ${N_s}({{M_{ - z}}} )= 20$, ${N_s}({{M_z}} )= 14$, and ${N_s}({{M_{ + z}}} )= 45$.



Fig. 5. (a) Captured image without fringes and corresponding cross-section passing through pixel C, and ${N_s}({{M_p}} )= 65$ which corresponds to ${N_s}({{M_p}} )$ in Fig. 4(c); (b) captured image with appearance of fringes and corresponding cross-section passing through pixel C, and ${N_s}({{M_{ - z}}} )= 43$ which corresponds to ${N_s}({{M_{ - z}}} )$ in Fig. 4(c); (c) captured image with maximum intensity fringe (i.e., zero-order fringe) and corresponding cross-section passing through pixel C, and ${N_s}({{M_z}} )= 18$ which corresponds to ${N_s}({{M_z}} )$ in Fig. 4(c); (d) captured image with disappearance of fringes and corresponding cross-section passing through pixel C with ${N_s}({{M_{ + z}}} )= 50$ which corresponds to ${N_s}({{M_{ + z}}} )$ in Fig. 4(c).


The simulation depicted in Fig. 4(a) was conducted as follows. In experiments, the intensity value of the cross-section in Fig. 5(a) was approximately 140. Thus, a straight line with an intensity value of 140 is regarded as ${I_{in}}$. The point spread function (PSF) of Gaussian blurring and additive white Gaussian noise (AWGN) are used to simulate variations in the intensity of the light reflected from the sample surface in Fig. 1. Gaussian blurring and additive white noise were implemented using three MATLAB functions, as follows:

  • a. $PSF = fspecial('gaussian',10,2 )$, where 10 refers to the size of the filter, 2 refers to the standard deviation, and $PSF$ refers to the Gaussian blurring kernel. The parameters 10 and 2 were set according to the experimental result in Fig. 5(a).
  • b. ${I_{gaussianblurring}} = imfilter({I_{in}},\; PSF,\; 'conv',\; 'circular' )$, where ${I_{in}}$ is the input image (i.e., the straight line with an intensity value of 140), $'conv'$ indicates convolution, ‘$circular$’ is a boundary option, and ${I_{gaussianblurring}}$ refers to the output image obtained following the convolution of ${I_{in}}$ and PSF.
  • c. ${I_{gaussiannoise}} = awgn({{I_{gaussianblurring}},SNR} )$, where $SNR$ indicates the signal-to-noise ratio (unit: dB) and ${I_{gaussiannoise}}$ is obtained by adding white Gaussian noise to ${I_{gaussianblurring}}$. The SNR parameter was set to 1 dB according to the cross-section of the experimental result in Fig. 5(a).

Figure 4(a) presents the simulated cross-section of the sample surface (without fringes), which is similar to the experimental result in Fig. 5(a). The comparably large values of ${N_s}({{M_p}} )= 65$ in Fig. 5(a) and ${N_s}({{M_p}} )= 124$ in Fig. 4(a) indicate that the simulation reproduces the jagged intensity variations observed in the experiments.
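The three MATLAB calls above can be mirrored in Python/NumPy as a rough sketch. Note that MATLAB's `awgn` assumes a 0 dBW input unless the `'measured'` option is given; here we assume the signal power is measured from the blurred profile, so the numbers will not match the paper's exactly, and `gaussian_kernel`/`simulate_surface` are our own names.

```python
import numpy as np

def gaussian_kernel(size: int = 10, sigma: float = 2.0) -> np.ndarray:
    """1-D analogue of fspecial('gaussian', size, sigma), normalised to sum 1."""
    x = np.arange(size) - (size - 1) / 2.0
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def simulate_surface(i_in: np.ndarray, snr_db: float = 1.0, seed: int = 0):
    """Circularly blur a flat profile (I_in ~ 140), then add white Gaussian
    noise at the given SNR in dB, mirroring imfilter(...,'conv','circular')
    followed by awgn."""
    k = gaussian_kernel()
    n = len(i_in)
    # Circular convolution via the FFT (equivalent to the 'circular' boundary).
    blurred = np.real(np.fft.ifft(np.fft.fft(i_in) * np.fft.fft(k, n)))
    sig_power = np.mean(blurred ** 2)
    noise_power = sig_power / (10 ** (snr_db / 10))
    rng = np.random.default_rng(seed)
    return blurred + rng.normal(0.0, np.sqrt(noise_power), n)
```

At 1 dB the noise power is nearly as large as the signal power, which is what produces the jagged cross-section resembling Fig. 4(a).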

Figure 4(b) presents a simulation of the ideal theoretical interferometric fringe of the white light source. In Fig. 4(b), the intensity value is equal to 113 in pixel rows 1∼300 and 1101∼1400. A smooth fringe is produced by Eqs. (12) and (13) in pixel rows 301∼1100. In Eq. (12) (from [26]), ${I_{fringe1}}$ is the superposition result of the three wavelengths ${\lambda _1} = 400nm,\; {\lambda _2} = 550nm,\; {\lambda _3} = 632.8nm$. In Eq. (12), the parameter x represents the pixel row and $ro{w_{scale}}$ is used to tune the fringe shape. In Eq. (13), the parameter ${I_{scale}}$ is used to tune the intensity scale of ${I_{fringe2}}$ in Fig. 4(b). The parameter ${I_{scale}}$ was set to 230 in order to make the simulated fringe ${I_{fringe2}}$ in Fig. 4(b) similar to the experimental fringe of Fig. 5(c).

$${I_{fringe1}} = \textrm{cos}\left( {\frac{{4\pi }}{{{\lambda_1}}}x \times ro{w_{scale}}} \right) + \textrm{cos}\left( {\frac{{4\pi }}{{{\lambda_2}}}x \times ro{w_{scale}}} \right) + \textrm{cos}\left( {\frac{{4\pi }}{{{\lambda_3}}}x \times ro{w_{scale}}} \right)$$
$${I_{fringe2}} = \frac{{{I_{fringe1}}}}{{Max\; ({I_{fringe1}})}} \times {I_{scale}}$$
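Equations (12)-(13) can be reproduced directly. In this Python/NumPy sketch, the value of `row_scale` (the pixel-to-metres mapping) is an assumption tuned by eye, as the paper tunes $ro{w_{scale}}$ to the fringe shape; `ideal_fringe` is our own name.

```python
import numpy as np

def ideal_fringe(rows: int, row_scale: float = 1e-9,
                 i_scale: float = 230.0) -> np.ndarray:
    """Superpose cosines at the three wavelengths of Eq. (12)
    and rescale the result to i_scale per Eq. (13)."""
    x = np.arange(rows, dtype=float)
    i_fringe1 = sum(np.cos(4 * np.pi / lam * x * row_scale)
                    for lam in (400e-9, 550e-9, 632.8e-9))
    return i_fringe1 / np.max(i_fringe1) * i_scale   # Eq. (13)
```

The three cosines align only near $x = 0$, so the superposition yields one dominant (zero-order) peak flanked by weaker fringes, as in Fig. 4(b).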
Figure 4(c) is the re-scaled intensity result of Fig. 4(b) multiplied by Fig. 4(a), which is similar to the experimental fringe intensity of Fig. 5(c). In Fig. 4(a) and Fig. 4(c), both of the corresponding values of ${N_s}({{M_p}} )$ are equal to 124 in the green 201×1 mask. Thus, ${N_s}({{M_p}} )$ in Fig. 4(c) represents the simulated variations in the cross-section without the fringe. Moreover, ${N_s}({{M_p}} )= 124$ is the highest value in Fig. 4(c), and ${N_s}({{M_p}} )= 65$ is the highest value in Fig. 5(a)-(d). As shown in the green mask of ${M_{ - z}}$ in Fig. 4(b), there are four peaks marked with an "${\times}$"; these represent four variations. The value of ${N_s}({{M_{ - z}}} )$ is therefore equal to 4. Similarly, ${N_s}({{M_z}} )$ and ${N_s}({{M_{ + z}}} )$ each equal 4. Thus, the simulated fringe of Fig. 4(b) is smooth. Although the varied intensity exhibits the high value of ${N_s}({{M_z}} )= 131$ in Fig. 4(a), the ideal maximum-intensity fringe with ${N_s}({{M_z}} )= 4$ in Fig. 4(b) effectively reduces the intensity variations to the lowest value of ${N_s}({{M_z}} )= 14$ in Fig. 4(c). The experimental fringes of Fig. 5(c) behave similarly, with ${N_s}({{M_z}} )= 18$. By contrast, the intensity variations also exhibit the high value of ${N_s}({{M_{ - z}}} )= 137$ in Fig. 4(a); however, the ideal appearance fringe with ${N_s}({{M_{ - z}}} )= 4$ in Fig. 4(b) has weaker intensity and cannot effectively reduce the intensity variations. Thus, in Fig. 4(c), ${N_s}({{M_{ - z}}} )$ is equal to 20, which is higher than ${N_s}({{M_z}} )= 14$. The experimental fringes of Fig. 5(b) also exhibit variations in intensity, with ${N_s}({{M_{ - z}}} )= 43$ (higher than ${N_s}({{M_z}} )= 18$ in Fig. 5(c)). Therefore, the simulated fringe of Fig. 4(c) behaves similarly to the experimental fringes of Fig. 5(a)-(d). Moreover, the simulation results presented in Fig. 4 and the experimental results presented in Fig. 5 indicate that the lowest value of ${N_s}(z )$ occurs at the maximum intensity of the fringe, whereas the highest value of ${N_s}(z )$ occurs in the cross-section without fringes.

4. Experimental results

The parameter settings of the experiments are listed in Table 1. In these experiments, three lighting conditions (normal light, strong light, and filtered light) were used to demonstrate the effect of the ideal smooth interferometric fringes and the variations in the intensity of the light reflected from the sample surface. In all of the step-height samples in Table 1, the high plane was marked as Surface A and the low plane was marked as Surface B. In the following subsection, we present the results of the proposed algorithm using various numbers of captured images with various movements of the scanning stage and masks of various sizes. The scanning distance of $M \times R$ μm must be greater than the step height of the sample. The smaller the parameter R, the higher the accuracy of the autofocusing algorithm. If a large value is used for R, the user should confirm that the fringe is located within the mask of pixel C in the images in order to perform the proposed algorithm. For example, the value of R was approximately 0.86 μm between Fig. 5(b) and Fig. 5(c), and also approximately 0.86 μm between Fig. 5(c) and Fig. 5(d). In Fig. 5, the shifting fringe remains within the mask of pixel C.


Table 1. Experiment parameter settings

4.1.1 Case of normal light

The efficacy of the proposed algorithm was assessed using a standard step-height sample (1.8 μm, from VLSI Standards, Inc.) with a Mirau objective lens to produce the superposition of fringes at multiple wavelengths, using a halogen lamp as the light source, as shown in Fig. 1. A total of 360 images (400×900 px) were captured. In each image, the mask over pixel C (9×9 px) was centered at (200, 450) for use in locating the focal point on Surface A. The distance between any two images (i.e., R) was 0.02485 μm. Figure 6(a) and Fig. 6(b) detail Steps 1-2, and Fig. 7(a) details Step 3. Figure 7(b) presents the 0th image with pixel C (200, 450) indicated by a dark solid circle and the 9×9 mask indicated by a square. The number of slope variations was calculated as ${N_s}(z )= {N_s}(0 )= 4$. Based on the eight focus-inflection points (${M_1}^{\prime},{\; }{M_2}^{\prime},{\; } \ldots ,{\; }{M_8}^{\prime}$) indicated in Fig. 7(a), the point at which pixel C was in focus corresponded to the 220th image (i.e., ${M_6}^{\prime}$). Two neighboring inflection positions were also identified in the 174th image (${M_5}^{\prime}$) and the 256th image (${M_7}^{\prime}$). As shown in Fig. 7(d), the 220th image (${M_6}^{\prime}$) presented smooth fringes and the lowest ${N_s}({{M_6}^{\prime}} )$ because the maximum-intensity fringe reduced variations in the intensity of the light reflected off the sample surface. By contrast, these variations increased the value of ${N_s}({{M_5}^{\prime}} )$ corresponding to the appearance of the weaker-intensity fringes, and likewise increased the value of ${N_s}({{M_7}^{\prime}} )$ corresponding to their disappearance. In the cross-section of Fig. 7(d), the position of pixel C corresponds to the region with fringes of highest intensity, which is the same focal point identified by algorithms based on zero-order interference fringe. Therefore, both algorithms find the same focal plane.
Note also that application of the neighboring ${M_{ - z}}$ and ${M_{ + z}}$ planes reduced the range of images required for re-scanning from the 0th-359th to the 174th-256th.
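The search described above can be sketched as follows. This is a minimal stand-in that scores each z-slice by ${N_s}$ alone; the ${f_1}(z )/{f_2}(z )$ weighting of Eq. (1) and its ${V_b}$/${C_b}$ parameters are omitted, and the mask is simplified to a 1D row through pixel C.

```python
import numpy as np

def ns_of_row(row):
    """Number of slope variations (local maxima) along a 1D cross-section."""
    d = np.sign(np.diff(row.astype(float)))
    d = d[d != 0]
    return int(np.sum((d[:-1] > 0) & (d[1:] < 0)))

def find_focal_image(stack, center, half_width=10, fringe_case=True):
    """Score every z-slice of a (Z, H, W) stack by Ns inside a 1x(2*half_width+1)
    mask through pixel C. With fringes the focal plane minimizes Ns (Secs. 4.1-4.3);
    without fringes it maximizes Ns (Sec. 4.4)."""
    r, c = center
    scores = np.array([ns_of_row(img[r, c - half_width:c + half_width + 1])
                       for img in stack])
    z_focus = int(np.argmin(scores)) if fringe_case else int(np.argmax(scores))
    return z_focus, scores
```

In the fringe case, the two neighboring focus-inflection points bounding the returned index could then be read off the `scores` array to shrink the re-scanning range, as the paper does with ${M_{ - z}}$ and ${M_{ + z}}$.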


Fig. 6. Illustration of functions performed by proposed algorithm: (a) Step 1; (b) Step 2.



Fig. 7. Results obtained using set-up presented in Fig. 1: (a) Step 3 using Eq. (1); (b) 0th image (z=0), and cross-section passing through pixel C; (c) 174th image with focus-inflection point z=${M_5}^{\prime}$=174, and cross-section showing appearance of fringes with higher values for ${N_s}({{M_5}^{\prime}} );$ (d) 220th image with focus-inflection point z=${M_6}^{\prime}$=220, and cross-section showing smooth fringes with lowest value of ${N_s}({{M_6}^{\prime}} )$; (e) 256th image with focus-inflection point z=${M_7}^{\prime}$=256, and cross-section with fringes and higher values for ${N_s}({{M_7}^{\prime}} )$; (f) 359th image (z=359) and cross-section passing through pixel C.


4.1.2 Focusing accuracy in case of normal light

As shown in Fig. 7(d), the focal point at C (200, 450) corresponded to the 220th image. The blue line in Fig. 8(a) indicates the 900 focal points generated by applying the proposed algorithm 900 times, whereas the red line indicates the results obtained by applying the algorithm based on zero-order interference fringe (Subsection 4.1.1) to the same 360 images. Note that the two lines nearly overlap. Note also that the dark solid circle corresponds to focal point C (200, 450) in the 220th image, and it is marked in blue-green on the left axis “Height (images)” in Fig. 8(a). The distance between any two images (i.e., R) was 0.02485 μm; the height values are expressed in green on the right axis “Height (μm)” in Fig. 8(a). The blue line in Fig. 8(a) indicates that the step height between Surface A (column=1∼500) and Surface B (column=750∼900) obtained using the proposed algorithm was 1.74 μm, which is within 0.06 μm (= 1.8 - 1.74 μm) of the ground-truth value. By contrast, the result obtained based on zero-order interference fringe was 1.73 μm, which corresponds to an accuracy of 0.07 μm (= 1.8 - 1.73 μm).
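The step-height computation behind these numbers is simply the difference of focal image indices scaled by R. The Surface B index below is hypothetical, chosen for illustration so that the arithmetic reproduces the reported 1.74 μm.

```python
# Hypothetical illustration: converting focal-image indices to a step height.
R = 0.02485            # distance between consecutive images, um
z_surface_A = 220      # focal image for pixel C on Surface A (Fig. 7(d))
z_surface_B = 150      # hypothetical focal image index on Surface B

step_height = abs(z_surface_A - z_surface_B) * R
print(f"{step_height:.4f} um")   # 70 images x 0.02485 um = 1.7395 um (~1.74 um)
```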


Fig. 8. (a) Red and blue curves are 2D profiles obtained by running proposed algorithm and zero-order interference fringe algorithm through 900 iterations, where pixel C is located in a smooth surface area and pixel D is located in the region of a uniform blemish; (b) focus-inflection points at pixel D obtained using proposed algorithm; (c) focus-inflection points at pixel C obtained using proposed algorithm with modified 91×3 mask.


The proposed algorithm proved to be more sensitive than the algorithm based on zero-order interference fringe, particularly in dealing with images showing signs of dirt or sample defects. Point D at (200, 630) in Fig. 7(d) indicates an area with uniform surface blemishes. The proposed algorithm identified the focus-inflection point corresponding to the minimum ${N_s}(z )$ in the 348th image, indicated by the pink solid circles in Fig. 8(a) and Fig. 8(b). The distribution of ${f_1}(z )$ and ${f_2}(z )$ values associated with pixel D in Fig. 8(b) differs from that associated with pixel C in Fig. 7(a), indicating that the proposed algorithm was able to differentiate between smooth surfaces and uniform surface blemishes. Applying the proposed algorithm with a modified 91×3 mask (in place of the original 9×9 mask), while keeping the other parameters constant, yields the result shown in Fig. 8(c): pixel C is in focus in the 219th image (i.e., ${M_8}^{\prime}$), compared with the 220th image in Fig. 7(a) and Fig. 7(d). In Fig. 7(a) and Fig. 8(c), the symmetric nature of ${f_1}(z )$ and ${f_2}(z )$ enables even the smaller 9×9 mask to find the focal point of C. Parameter ${V_b}$ is used to tune the sensitivity of ${f_2}(z )$ in Eq. (1) to surface blemishes. Curve ${f_2}(z )$ in Fig. 7(a) is smallest in the 220th image (and in the 219th image in Fig. 8(c)). This differs in the case of surface blemishes (Fig. 8(b)), where curve ${f_2}(z )$ is greatest in the 218th image. Thus, the value of ${V_b}$ influences the result of Eq. (1). In this study, ${V_b}$ was set at 21, which is suitable for masks of various sizes.

When using algorithms based on zero-order interference fringe, the intensity values associated with fringes were indistinguishable from those associated with uniform surface blemishes. This is because, in Fig. 9(b), the fringe on the uniform blemish is weaker but intact owing to uniform scattering, and the maximum intensity of pixel D occurs in the 220th image. Thus, as indicated by the red line in Fig. 8(a), pixel D in the uniformly blemished region was indistinguishable under the zero-order interference scheme.


Fig. 9. (a) 0th image (z=0) in Fig. 7(b) and cross-section passing through pixel D; (b) 220th image (z=220) in Fig. 7(d) and cross-section passing through pixel D; (c) 359th image (z=359) in Fig. 7(f) and cross-section passing through pixel D.


Figure 10(a) presents the 3D profile obtained by applying the proposed algorithm 360,000 times (400×900). Figure 10(b) presents the 3D profile obtained using the algorithm based on zero-order interference fringe. Again, as shown in Fig. 10(a) and Fig. 10(b), the proposed algorithm outperformed the zero-order interference scheme in cases of surfaces with uniform blemishes. The proposed algorithm took 1836 seconds to create the 3D profile, whereas the other algorithm took 29 seconds. While the proposed algorithm was slower overall, both algorithms took less than 0.05 seconds to find a single focal pixel, which is acceptable.


Fig. 10. (a) 3D profile obtained by running proposed algorithm through 360,000 iterations (i.e., 400×900) (note region with uniform surface blemishes has been reconstructed); (b) 3D profile obtained using zero-order interference fringe algorithm, wherein blemished region could not be reconstructed.


4.1.3 Parameter ${C_b}$ in case of normal light

In Eq. (1), the numerator (${f_1}$) and denominator (${f_2}$) are related to the function used to differentiate between the surface and blemishes (${C_b}$). The distribution for the surface (shown in Fig. 7(a) and Fig. 8(c)) is normal, whereas the distribution for the blemishes (shown in Fig. 8(b)) is abnormal. Therefore, a wide range of values can be selected for ${C_b}$. Figure 11(a)-(c) present 2D profiles respectively derived using ${C_b}$=7, 25, and 57 for the same cross-section passing through pixels C and D (shown in Fig. 8(a) with ${C_b}$=15). These four figures clearly illustrate the efficacy with which the proposed scheme distinguishes between the surface and blemishes over a wide range of ${C_b}$ values. As shown in Fig. 11(a), ${C_b}$=7 was too small, which resulted in errors on Surface A and Surface B. As shown in Fig. 11(c), ${C_b}$=57 was too large, which similarly resulted in large distortions on Surface A and Surface B. Generally speaking, parameter ${C_b}$ should be set according to the resolution of the camera and the resolution of the experimental set-up (for example, 10X Mirau). A fixed value of ${C_b}$ can be used for different samples. In this study, ${C_b}$ was set at 15.


Fig. 11. 2D profiles rendered in the same reconstruction direction as Fig. 8(a) with (a)${\; }{C_b} = 7$, (b)${\; }{C_b} = 25$, and (c)${\; }{C_b} = 57$.


4.2 Focusing accuracy in case of strong light

Here, we assess the performance of the proposed algorithm when applied to a sample with non-uniform blemishes using a stronger light source in the experimental set-up shown in Fig. 1 and compare it with the performance of an algorithm based on zero-order interference fringe. A total of 360 images (400×900 px) were captured using the parameters listed in Subsection 4.1.1. A 9×9 px mask centered at pixel C (200, 600) was used to determine the height of Surface B. The distance between any two images was 0.02 μm. Figure 12(a) presents the distribution of focus-inflection points associated with the focal position. We determined that the focal position corresponded to the 150th image (i.e., ${M_2}^{\prime}$) with the lowest ${N_s}({{M_2}^{\prime}} )$, as shown in Fig. 12(d). The two neighboring focus-inflection points corresponded to the 56th image (${M_1}^{\prime}$) and 256th image (${M_3}^{\prime}$), as respectively shown in Fig. 12(c) and Fig. 12(e). The stronger light source exaggerated the variations in the intensity of the light reflected from the sample surface. Thus, the values of ${N_s}(\textrm{z} )$ in Fig. 12(b)(f) are higher than the values of ${N_s}(\textrm{z} )$ in Fig. 7(b)(f). Moreover, as shown in the cross-section in Fig. 12(d), the strong light source produced distorted fringes. The strong light source also made the intensity values on the region with the non-uniform blemishes easily distinguishable due to the non-uniform scattering (from the increased diffuse reflection), as shown in Fig. 13(a)(b)(c). Figure 13(b) is the cross-section passing through pixel D on the 150th image shown in Fig. 12(d). Similarly, Fig. 13(a) and Fig. 13(c) are the cross-sections passing through pixel D on the 0th image and the 359th image shown in Fig. 12(b) and Fig. 12(f), respectively. Compared with the uniform blemishes under normal light in Fig. 9(a) (without fringes), the non-uniform blemishes under the stronger light source in Fig. 13(a) exhibit greater variations in intensity.
Moreover, compared to Fig. 9(b), there are more variations in Fig. 13(b): variations occur in the maximum-intensity fringe because, for pixel D, this plane is out of focus. Figure 14(a) presents the 2D profiles obtained using the proposed algorithm and the algorithm based on zero-order interference fringe over 900 iterations. This figure shows that when a normal light source is used, the zero-order scheme derives an accurate 2D profile, but uniform blemishes are not rendered clearly, as indicated by the red line in Fig. 8(a). When a strong light source is used, non-uniform blemishes are rendered clearly, but the 2D profile is easily distorted, as indicated by the red line in Fig. 14(a). By contrast, the proposed algorithm produces roughly the same results under both normal and strong light sources, as indicated by the clear rendering of blemishes in Fig. 8(a) and Fig. 14(a). Specifically, pixel D in Fig. 14(a) and Fig. 14(b) is located at (200, 785), corresponding to a non-uniform blemish on Surface B (focal plane = 325th image). Pixel D in Fig. 8(a) and Fig. 8(b) is located at (200, 630), corresponding to a uniform blemish on Surface A (focal plane = 348th image). Generally speaking, in Fig. 14(a), for the algorithm based on zero-order interference fringe, the non-uniform blemishes on Surface B appear higher than Surface A due to the limited balance of intensities between the surfaces and the blemishes. This limitation means that images of blemish intensity become clearer as the input light grows stronger; however, too strong an input light will likely lead to overexposure of the images of Surface A (i.e., the high region) and of Surface B (i.e., the low region). For the proposed algorithm, on the other hand, the blemishes on both Surface B (325th image) and Surface A (348th image) are close to the upper limit of the 359th image. This clearly demonstrates that the proposed algorithm is able to deal effectively with surface blemishes under a variety of light sources.


Fig. 12. Focusing using fringes produced using a Mirau objective lens under a strong light source: (a) Results from Step 3; (b) 0th image (z=0) and cross-section passing through pixel C; (c) 56th image with focus-inflection point z=${M_1}^{\prime}$=56 and the cross-section indicating the appearance of fringes with a higher value of ${N_s}({{M_1}^{\prime}} )$; (d) 150th image with focus-inflection point z=${M_2}^{\prime}$=150, and cross-section presenting smooth fringes with minimum value of ${N_s}({{M_2}^{\prime}} )$; (e) 256th image with focus-inflection point z=${M_3}^{\prime}$=256, and cross-section indicating the disappearance of fringe with higher value of ${N_s}({{M_3}^{\prime}} )$; (f) 359th image (z=359) and cross-section passing through pixel C.



Fig. 13. (a) 0th image (z=0) in Fig. 12(b) and cross-section passing through pixel D; (b) 150th image (z=150) in Fig. 12(d) and cross-section passing through pixel D; (c) 359th image (z=359) in Fig. 12(f) and cross-section passing through pixel D.



Fig. 14. (a) 2D profile obtained using the proposed algorithm (900 iterations) and algorithm based on zero-order interference fringe. Pixel C is located in an area corresponding to a smooth surface and pixel D is located in the region of a non-uniform blemish; (b) Focus-inflection points of pixel D obtained using the proposed algorithm.


4.3 Focusing accuracy in case of filtered light (632nm)

Here, we used a quartz step sample and a 632-nm filter (CWL = 632 nm, FWHM = 10 nm, minimum transmission ≥ 45%, blocking wavelength range 200-1200 nm) to reduce a light source of multiple wavelengths to a single wavelength (632 nm). The filter was placed between Lens_1 and beam splitter_1 in Fig. 1. A total of 360 images (400×900 px) were captured. In each image, the mask over pixel C (91×3 px) was centered at (200, 450) for use in locating the focal point on Surface B. The distance between any two images was 0.02 μm. As shown in Fig. 15(a), the proposed algorithm identified two focus-inflection points at $\textrm{z} = {M_1}^{\prime}$=165 and $\textrm{z} = {M_2}^{\prime}$=348. We randomly selected the 0th image (Fig. 15(b)), the 49th image (Fig. 15(c)), the 299th image (Fig. 15(e)), and the 359th image (Fig. 15(f)) to illustrate the functions of the proposed algorithm. The 632-nm filter (FWHM 10 nm) reduced variations in the intensity of the light reflected from the sample surface, with the result that the fringes in Fig. 15(b)-(f) are much smoother than those in Fig. 7(b)-(f). Although the smoother fringes reduced the ${N_s}(z )$ values, the proposed algorithm was still able to locate the focal point at pixel C, as indicated in Fig. 15(d), where the intensity on the focal plane of the 165th image far exceeded the black line. In Fig. 15(b)(c)(e)(f), the intensities outside the focal plane are below the black line, such that ${N_s}(z )$ could still be used to identify fringes of higher intensity.
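The effect of the filter on fringe persistence can be illustrated with a textbook two-beam interference model (cf. Hecht [26]): a cosine carrier under a Gaussian coherence envelope whose length scales as $\lambda^2/\Delta\lambda$. This is a hedged sketch with illustrative values, not the paper's optical model.

```python
import numpy as np

def interferogram(z_um, lam_um=0.632, bandwidth_um=0.30):
    """Textbook two-beam interferogram along the scan axis: a cosine carrier of
    period lam/2 under a Gaussian coherence envelope; the coherence length is
    taken as lam**2 / bandwidth (up to a constant factor)."""
    coherence_len = lam_um ** 2 / bandwidth_um
    envelope = np.exp(-(z_um / coherence_len) ** 2)
    return 1.0 + envelope * np.cos(4.0 * np.pi * z_um / lam_um)

z = np.linspace(-5.0, 5.0, 2001)
halogen = interferogram(z, bandwidth_um=0.30)    # broadband source: short envelope
filtered = interferogram(z, bandwidth_um=0.01)   # 632-nm filter, FWHM ~10 nm

# With the filter, fringe contrast away from focus stays high, consistent with
# the smoother, longer-lasting fringes of Fig. 15 versus Fig. 7.
window = (z > 1.5) & (z < 2.5)
print(np.ptp(halogen[window]), np.ptp(filtered[window]))
```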


Fig. 15. (a) Results of Step 3; (b) the 0th image (z=0), and cross-section passing through pixel C, showing smoother fringes resulting from the 632nm filter; (c) 49th image and cross-section; (d) 165th image with focus-inflection point z=${M_1}^{\prime}$=165, and cross-section showing smooth fringes and the lowest ${N_s}({{M_1}^{\prime}} )$ value; (e) 299th image and cross-section; (f) 359th image (z=359) and cross-section.


4.4 Case with no fringes using a normal objective lens

In Subsections 4.1, 4.2, and 4.3, the sample surfaces were smooth and the Mirau objective was used. In the next case, no fringes are produced because the Mirau objective in the set-up presented in Fig. 1 was replaced with a normal 10× objective lens. Moreover, an aluminum sample with a rough surface and a step height of approximately 100 μm was used to demonstrate that the focal point identified by the proposed algorithm was in good agreement with those identified using the autofocus algorithms listed in Table 2.


Table 2. Results of various autofocus algorithms in identifying focal point at pixel C

For Fig. 16, we captured 700 images (400×400 px), labeled z=0th-699th. In each image, the mask over pixel C (91×91 px) was centered at (200, 200) for use in locating the focal point on Surface A. The distance between any two images was 1 μm. Figure 16(a) presents the distribution of focus-inflection positions at pixel C obtained using the proposed algorithm. The focal plane corresponded to the 506th image (i.e., ${M_6}^{\prime}$) associated with the highest ${N_s}({{M_6}^{\prime}} )$, as shown in Fig. 16(d). Two neighboring focus-inflection points corresponded to the 435th (i.e., ${M_5}^{\prime}$) and 585th (i.e., ${M_7}^{\prime}$) images, as respectively shown in Fig. 16(c) and Fig. 16(e). In Fig. 16(c)(d)(e), under the defocal blurring, ${N_s}({{M_5}^{\prime}} )$ and ${N_s}({{M_7}^{\prime}} )$ were smaller than ${N_s}({{M_6}^{\prime}} )$. Equations (1)-(10) were applied to create Fig. 16(a); however, in Eq. (1), the arg min function is replaced by arg max in order to find the maximum value of ${N_s}({{M_6}^{\prime}} )$, as shown in Table 2. Further details related to this case remain the subject of future work.


Fig. 16. Focusing on a rough surface without fringes using a normal objective lens: (a) Results of Step 3; (b) 0th image (z=0) and cross-section passing through pixel C; (c) 435th image corresponding to focus-inflection position z=${M_5}^{\prime}$=435 and cross-section with a lower value of ${N_s}({{M_5}^{\prime}} )$ due to defocal blurring; (d) 506th image corresponding to focus-inflection position z=${M_6}^{\prime}$=506 and cross-section with the maximum value of ${N_s}({{M_6}^{\prime}} )$; (e) 585th image corresponding to focus-inflection position z=${M_7}^{\prime}$=585 and cross-section with a lower value of ${N_s}({{M_7}^{\prime}} )$ due to defocal blurring; (f) 699th image (z=699), and cross-section passing through pixel C.


For the sake of comparison, a number of well-known autofocus algorithms were applied to the same 700 images with the same pixel C (200, 200) and the same 91×91 mask. The results are shown in Fig. 17(a) and Table 2, where I indicates the intensity value, $({i,j} )$=C (200, 200), and $i = {i_1}\sim {i_2}$ and $j = {j_1}\sim {j_2}$ indicate the range of mask sizes (see Eq. (1)). The Brenner gradient and square gradient algorithms determined that the focal point at C (200, 200) corresponded to the 506th image, which matches the result obtained using the proposed algorithm. The image power algorithm linked the focal point with the 500th image, whereas the energy Laplace algorithm linked it to the 513th image. The maximum pixel intensity algorithm was an outlier, linking the focal point to the 574th image, due presumably to the roughness of the sample surface. The algorithm based on zero-order interference fringe uses the same equation as the maximum pixel intensity algorithm; however, it does so in searching for pixels associated with the fringe of highest intensity. We can see in Fig. 17(a) that, for these conventional algorithms, the re-scanning range again spanned the 0th-699th images, owing to the lack of inflection points within that range. Using the focus-inflection position (${M_6}^{\prime}$) together with its two neighboring positions (${M_5}^{\prime}$ and ${M_7}^{\prime}$) reduced the re-scanning range of the proposed algorithm from the 0th-699th to the 436th-586th images.
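The conventional metrics compared here can be sketched as follows. These are the common textbook forms of the Table 2 measures; the exact operators and normalizations used in [25] may differ slightly.

```python
import numpy as np

def focus_scores(img):
    """Common forms of the Table 2 focus measures over a mask region; the focal
    image is the one maximizing a given score across the z-stack."""
    I = img.astype(float)
    # Discrete Laplacian via slicing (4-neighbor kernel), no external deps
    lap = (I[:-2, 1:-1] + I[2:, 1:-1] + I[1:-1, :-2] + I[1:-1, 2:]
           - 4.0 * I[1:-1, 1:-1])
    return {
        "square_gradient": float(np.sum(np.diff(I, axis=1) ** 2)),
        "brenner_gradient": float(np.sum((I[:, 2:] - I[:, :-2]) ** 2)),
        "image_power": float(np.sum(I ** 2)),
        "energy_laplace": float(np.sum(lap ** 2)),
        "max_pixel_intensity": float(I.max()),
    }
```

Each metric peaks at or near the focal slice; as in Table 2, different metrics can disagree by a few images, which is why the outlier behavior of the maximum-pixel-intensity measure on rough surfaces is unsurprising.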


Fig. 17. (a) Results of various autofocus algorithms in identifying the focal point at pixel C (200, 200) using the same 700 images as in Fig. 16; (b) 3D profile obtained by running the proposed algorithm through 160,000 iterations (i.e., 400×400).


For further comparison, the same 700 images were processed again by the proposed algorithm and the well-known autofocus algorithms using a smaller 21×9 mask (in place of the original 91×91 mask). The proposed, square gradient, and Brenner gradient algorithms determined that the focal point at C (200, 200) corresponded to the 509th, 507th, and 507th images, respectively. The other algorithms failed due to the smaller 21×9 mask. Figure 17(b) presents the 3D profile obtained by applying the proposed algorithm with the smaller 21×9 mask 160,000 times (400×400). The proposed algorithm took 1907 seconds to create the 3D profile, or less than 0.05 seconds per focal pixel, which is acceptable. Therefore, the proposed algorithm is robust to different mask sizes.

4.5 Summary

In Subsection 4.1.1 (fringe case), the focal point at pixel C corresponding to the lowest ${N_s}({{M_6}^{\prime}} )$ matches the focal point identified by the algorithm based on zero-order interference fringe. The proposed algorithm can also be used to find the two neighboring inflection points associated with the appearance and disappearance of fringes, thereby making it possible to reduce the re-scanning range to within those bounds. In Subsection 4.1.2 (fringe case with uniform blemishes), the accuracy of the proposed algorithm (0.06 μm) was comparable to that of the algorithm based on zero-order interference fringe (0.07 μm). In Subsection 4.1.3 (fringe case with uniform blemishes), a wide range of values could be set for parameter ${C_b}$ because ${N_s}(z )$ is higher for blemishes and lower for smooth focal planes. In other words, the distribution of ${f_1}(z )$ and ${f_2}(z )$ in the region of uniform blemishes (Fig. 8(b)) differs from that on the surface (Fig. 7(a)). This makes it possible to differentiate between the two using Eq. (1), as indicated by the reconstructed 2D and 3D profiles respectively shown in Fig. 8(a) and Fig. 10(a). Here again, the proposed algorithm is able to reduce the re-scanning range based on the two neighboring inflection points. The proposed algorithm performed well in cases involving non-uniform blemishes and an excessively strong light source (Subsection 4.2). In Subsection 4.3, it also performed well when a 632-nm filter (FWHM 10 nm) was used to reduce variations in the intensity of the light reflected from the sample surface. Although the filter reduced the values of ${N_s}(z )$, they could still be used to identify fringes of higher intensity.
In Subsection 4.4 (case without fringes), the focal point at pixel C corresponding to the highest ${N_s}({{M_6}^{\prime}} )$ was in good agreement with the focal points identified using the autofocus algorithms listed in Table 2 (for the sample with a rough surface).

5. Conclusions

This study developed a novel image-based autofocus algorithm, which uses the number of slope variations (i.e., ${N_s}(z )$) to identify the focal plane. When applied to interferometric microscopes, the weaker intensity of the ideal fringes and variations in the intensity of the light reflected from the sample surface increased the ${N_s}(z )$ values. By contrast, the maximum intensity of the ideal fringes effectively reduced these variations, such that the lowest ${N_s}(z )$ occurred along the focal plane. The identification of focus-inflection points makes it possible to reduce the re-scanning range and thereby enhance computational efficiency. In experiments, the proposed algorithm performed at least as well as existing autofocus algorithms, achieving an accuracy of 0.06 μm, comparable to that of schemes based on zero-order interference fringe (0.07 μm). However, unlike the zero-order interference fringe algorithm, the proposed scheme is able to identify the focal point in regions with uniform surface blemishes under normal lighting conditions as well as strong (non-uniform) blemishes under excessively strong light sources. The proposed algorithm generally runs through a large number of iterations (i.e., 400-360,000), producing a large number of focal points by which to plot 2D or 3D profiles that are robust and highly consistent. The proposed algorithm was also effective in cases of rough samples without fringes, where the results were in good agreement with the focal points identified using the autofocus algorithms listed in Table 2.

Funding

NARLabs I-DREAM Grant for International Cooperation with imec; Ministry of Science and Technology, Taiwan (MOST 107-2221-E-492-024-MY3), Hsinchu Science Park Bureau, Ministry of Science and Technology, Taiwan (SIPA 110AT09B).

Disclosures

JING-FENG WENG*, GUO-HAO LU, CHUN-JEN WENG, YU-HSIN LIN, and CHAO-FENG LIU: Taiwan Instrument Research Institute, National Applied Research Laboratories.

ROBBIE VINCKE, HSIAO-CHUN TING, and TING-TING CHANG: imec Taiwan Co.

The authors, JING-FENG WENG*, GUO-HAO LU, CHUN-JEN WENG, YU-HSIN LIN, CHAO-FENG LIU, ROBBIE VINCKE, HSIAO-CHUN TING, and TING-TING CHANG, declare no conflicts of interest.

References

1. S. A. Lee, X. Ou, J. Eugene Lee, and C. Yang, “Chip-scale fluorescence microscope based on a silo-filter complementary metal-oxide semiconductor image sensor,” Opt. Lett. 38(11), 1817–1819 (2013). [CrossRef]  

2. L. Huang and J.-X. Cheng, “Nonlinear Optical Microscopy of Single Nanostructures,” Annu. Rev. Mater. Res. 43(1), 213–236 (2013). [CrossRef]  

3. S. Schaefer, S. A. Boehm, and K. J. Chau, “Automated, portable, low-cost bright-field and fluorescence microscope with autofocus and autoscanning capabilities,” Appl. Opt. 51(14), 2581–2588 (2012). [CrossRef]  

4. Young-Duk Kim, Myoung-Ki Ahn, and Dae-Gab Gweon, “Design and Fabrication of a Multi-modal Confocal Endo-Microscope for Biomedical Imaging,” J. Opt. Soc. Korea 15(3), 300–304 (2011). [CrossRef]  

5. M. Anthonisen, Y. Zhang, M. Hussain Sangji, and P. Grütter, “Quantifying bio-filament morphology below the diffraction limit of an optical microscope using out-of-focus images,” Appl. Opt. 59(9), 2914–2923 (2020). [CrossRef]  

6. G. E. Nevskaya and M. G. Tomilin, “Adaptive lenses based on liquid crystals,” J. Opt. Technol. 75(9), 563–573 (2008). [CrossRef]  

7. J. Lee, J. Lee, and Y. H. Won, “Nonmechanical three-dimensional beam steering using electrowetting-based liquid lens and liquid prism,” Opt. Express 27(25), 36757–36766 (2019). [CrossRef]  

8. Zengqian Ding, Chinhua Wang, Zhixiong Hu, Zhenggang Cao, Zhen Zhou, Xiangyu Chen, Hongyu Chen, and Wen Qiao, “Surface profiling of an aspherical liquid lens with a varied thickness membrane,” Opt. Express 25(4), 3122–3132 (2017). [CrossRef]  

9. H.-M. Son, M. Y. Kim, and Y.-J. Lee, “Tunable-focus liquid lens system controlled by antagonistic winding-type SMA actuator,” Opt. Express 17(16), 14339–14350 (2009). [CrossRef]  

10. E. Aytac-Kipergil, E. J. Alles, H. C. Pauw, J. Karia, S. Noimark, and A. E. Desjardins, “Versatile and scalable fabrication method for laser-generated focused ultrasound transducers,” Opt. Lett. 44(24), 6005–6008 (2019). [CrossRef]  

11. K.-H. Kim, S.-Y. Lee, and S. Kim, “A mobile auto-focus actuator based on a rotary VCM with the zero holding current,” Opt. Express 17(7), 5891–5896 (2009). [CrossRef]  

12. Thomas Chaigne, Jérôme Gateau, Ori Katz, Emmanuel Bossy, and Sylvain Gigan, “Light focusing and two-dimensional imaging through scattering media using the photoacoustic transmission matrix with an ultrasound array,” Opt. Lett. 39(9), 2664–2667 (2014). [CrossRef]  

13. C. Guo, Z. Ma, X. Guo, W. Li, X. Qi, and Q. Zhao, “Fast auto-focusing search algorithm for a high-speed and high-resolution camera based on the image histogram feature function,” Appl. Opt. 57(34), F44–F49 (2018). [CrossRef]  

14. Jie Cao, Yang Cheng, Peng Wang, Kaiyu Zhang, Yuqing Xiao, Kun Li, Yuxin Peng, and Qun Hao, “Autofocusing imaging system based on laser ranging and a retina-like sample,” Appl. Opt. 56(22), 6222–6229 (2017). [CrossRef]  

15. Y. Fujishiro, T. Furukawa, and S. Maruo, “Simple autofocusing method by image processing using transmission images for large-scale two-photon lithography,” Opt. Express 28(8), 12342–12351 (2020). [CrossRef]  

16. Ming Tang, Chao Liu, and Xiao Ping Wang, “Autofocusing and image fusion for multi-focus plankton imaging by digital holographic microscopy,” Appl. Opt. 59(2), 333–345 (2020). [CrossRef]  

17. Z. Yan, G. Chen, W. Xu, C. Yang, and Y. Lu, “Study of an image autofocus method based on power threshold function wavelet reconstruction and a quality evaluation algorithm,” Appl. Opt. 57(33), 9714–9721 (2018). [CrossRef]  

18. Min Seok Oh, Hong Jin Kong, Tae Hoon Kim, and Sung Eun Jo, “Autofocus technique for three-dimensional imaging, direct-detection laser radar using Geiger-mode avalanche photodiode focal-plane array,” Opt. Lett. 35(24), 4214–4216 (2010). [CrossRef]  

19. Y. Tian, “Autofocus using image phase congruency,” Opt. Express 19(1), 261–270 (2011). [CrossRef]  

20. Grégoire Saerens, Lukas Lang, Claude Renaut, Flavia Timpu, Viola Vogler-Neuling, Christophe Durand, Maria Tchernycheva, Igor Shtrom, Alexey Bouravleuv, Rachel Grange, and Maria Timofeeva, “Image-based autofocusing system for nonlinear optical microscopy with broad spectral tuning,” Opt. Express 27(14), 19915–19930 (2019). [CrossRef]  

21. S. Yazdanfar, K. B. Kenny, K. Tasimi, A. D. Corwin, E. L. Dixon, and R. J. Filkins, “Simple and robust image-based autofocusing for digital microscopy,” Opt. Express 16(12), 8670–8677 (2008). [CrossRef]  

22. Meng Lyu, Caojin Yuan, Dayan Li, and Guohai Situ, “Fast autofocusing in digital holography using the magnitude differential,” Appl. Opt. 56(13), F152–F157 (2017). [CrossRef]  

23. P. Yang, S. Fang, X. Zhu, M. Komori, and A. Kubo, “Autofocus algorithm of interferogram based on object image and registration technology,” Appl. Opt. 52(36), 8723–8731 (2013). [CrossRef]  

24. Patrick Sandoz and Gilbert Tribillon, “Profilometry by Zero-order Interference Fringe Identification,” J. Mod. Opt. 40(9), 1691–1700 (1993). [CrossRef]  

25. D. Vollath, “Automatic focusing by correlative methods,” J. Microsc. 147(3), 279–288 (1987). [CrossRef]  

26. E. Hecht, Optics, 4th ed., International Edition (Addison-Wesley, 2002), Chap. 9.






Figures (17)

Fig. 1. Experimental setup of the interferometric microscope used in this study.

Fig. 2. (a) Stack of M images labeled z = 0th, 1st, …, $(M-1)$th. At pixel C, plane $M_{-z}$ indicates the appearance of fringes, corresponding to a higher $N_s(M_{-z})$; focal plane $M_z$ indicates the appearance of smooth fringes on the focal plane, corresponding to the lowest $N_s(M_z)$; plane $M_{+z}$ indicates the disappearance of fringes, corresponding to a higher $N_s(M_{+z})$; (b) ideal smooth fringe produced from a white light source; (c) jagged variations in the intensity of light from the sample surface (produced by diffuse reflection); (d) appearance of the fringe produced from the corresponding $M_{-z}$ images in Figs. 2(b) and 2(c); (e) maximum-intensity fringe (i.e., zero-order fringe) produced from the corresponding $M_z$ images in Figs. 2(b) and 2(c); (f) disappearance of the fringe produced from the corresponding $M_{+z}$ images in Figs. 2(b) and 2(c).

Fig. 3. Implementation of the proposed autofocus algorithm in three steps: (a) Step 1: determine the total number of slope variations, $N_s(z)$, in the stack of images, where z = 1 to $M-1$; (b) Step 2: derive $f_1(z)$ and $f_2(z)$ from $N_s(z)$; (c) Step 3: identify the focus-inflection points, $M_i'$, from curve $f_1(z)$, and then identify the focal point, $M_z$, using Eq. (11) for interferometric microscopes.

Fig. 4. (a) Simulated variations in the intensity of the light reflected from the sample surface in Fig. 2(c), with $N_s(M_p)=124$, $N_s(M_{-z})=137$, $N_s(M_z)=131$, and $N_s(M_{+z})=143$; (b) simulated ideal interferometric fringe of a white light source in Fig. 2(b), with $N_s(M_p)=0$, $N_s(M_{-z})=4$, $N_s(M_z)=4$, and $N_s(M_{+z})=4$; (c) simulated fringe obtained by multiplying Fig. 4(a) by Fig. 4(b), with $N_s(M_p)=124$, $N_s(M_{-z})=20$, $N_s(M_z)=14$, and $N_s(M_{+z})=45$.

Fig. 5. (a) Captured image without fringes and corresponding cross-section passing through pixel C, with $N_s(M_p)=65$, corresponding to $N_s(M_p)$ in Fig. 4(c); (b) captured image with appearance of fringes and corresponding cross-section passing through pixel C, with $N_s(M_{-z})=43$, corresponding to $N_s(M_{-z})$ in Fig. 4(c); (c) captured image with maximum-intensity fringe (i.e., zero-order fringe) and corresponding cross-section passing through pixel C, with $N_s(M_z)=18$, corresponding to $N_s(M_z)$ in Fig. 4(c); (d) captured image with disappearance of fringes and corresponding cross-section passing through pixel C, with $N_s(M_{+z})=50$, corresponding to $N_s(M_{+z})$ in Fig. 4(c).

Fig. 6. Illustration of functions performed by the proposed algorithm: (a) Step 1; (b) Step 2.

Fig. 7. Results obtained using the setup presented in Fig. 1: (a) Step 3 using Eq. (11); (b) 0th image (z = 0) and cross-section passing through pixel C; (c) 174th image with focus-inflection point z = $M_5'$ = 174, and cross-section showing the appearance of fringes with higher values of $N_s(M_5')$; (d) 220th image with focus-inflection point z = $M_6'$ = 220, and cross-section showing smooth fringes with the lowest value of $N_s(M_6')$; (e) 359th image with focus-inflection point z = $M_7'$ = 359, and cross-section with fringes and higher values of $N_s(M_7')$; (f) 359th image (z = 359) and cross-section passing through pixel C.

Fig. 8. (a) Red and blue curves are 2D profiles obtained by running the proposed algorithm and the zero-order interference fringe algorithm through 900 iterations, where pixel C is located in a smooth surface area and pixel D is located in the region of a uniform blemish; (b) focus-inflection points at pixel D obtained using the proposed algorithm; (c) focus-inflection points at pixel C obtained using the proposed algorithm with a modified 91×3 mask.

Fig. 9. (a) 0th image (z = 0) in Fig. 7(b) and cross-section passing through pixel D; (b) 220th image (z = 220) in Fig. 7(d) and cross-section passing through pixel D; (c) 359th image (z = 359) in Fig. 7(f) and cross-section passing through pixel D.

Fig. 10. (a) 3D profile obtained by running the proposed algorithm through 360,000 iterations (i.e., 400×900); note that the region with uniform surface blemishes has been reconstructed; (b) 3D profile obtained using the zero-order interference fringe algorithm, wherein the blemished region could not be reconstructed.

Fig. 11. 2D profiles rendered in the same reconstruction direction as Fig. 8(a) with (a) $C_b=7$, (b) $C_b=25$, and (c) $C_b=57$.

Fig. 12. Focusing using fringes produced by a Mirau objective lens under a strong light source: (a) results of Step 3; (b) 0th image (z = 0) and cross-section passing through pixel C; (c) 56th image with focus-inflection point z = $M_1'$ = 56, and cross-section indicating the appearance of fringes with a higher value of $N_s(M_1')$; (d) 150th image with focus-inflection point z = $M_2'$ = 150, and cross-section presenting smooth fringes with the minimum value of $N_s(M_2')$; (e) 256th image with focus-inflection point z = $M_3'$ = 256, and cross-section indicating the disappearance of fringes with a higher value of $N_s(M_3')$; (f) 359th image (z = 359) and cross-section passing through pixel C.

Fig. 13. (a) 0th image (z = 0) in Fig. 12(b) and cross-section passing through pixel D; (b) 150th image (z = 150) in Fig. 12(d) and cross-section passing through pixel D; (c) 359th image (z = 359) in Fig. 12(f) and cross-section passing through pixel D.

Fig. 14. (a) 2D profile obtained using the proposed algorithm (900 iterations) and the algorithm based on zero-order interference fringe; pixel C is located in an area corresponding to a smooth surface and pixel D is located in the region of a non-uniform blemish; (b) focus-inflection points of pixel D obtained using the proposed algorithm.

Fig. 15. (a) Results of Step 3; (b) 0th image (z = 0) and cross-section passing through pixel C, showing smoother fringes resulting from the 632 nm filter; (c) 49th image and cross-section; (d) 165th image with focus-inflection point z = $M_1'$ = 165, and cross-section showing smooth fringes and the lowest $N_s(M_1')$ value; (e) 299th image and cross-section; (f) 359th image (z = 359) and cross-section.

Fig. 16. Focusing on a rough surface without fringes using a normal objective lens: (a) results of Step 3; (b) 0th image (z = 0) and cross-section passing through pixel C; (c) 435th image corresponding to focus-inflection position z = $M_5'$ = 435, and cross-section indicating the appearance of fringes with a higher value of $N_s(M_5')$; (d) 506th image corresponding to focus-inflection position z = $M_6'$ = 506, and cross-section with the maximum value of $N_s(M_6')$; (e) 585th image corresponding to focus-inflection position z = $M_7'$ = 585, and cross-section indicating the disappearance of fringes with a higher value of $N_s(M_7')$; (f) 699th image (z = 699) and cross-section passing through pixel C.

Fig. 17. (a) Results of various autofocus algorithms in identifying the focal point at pixel C (200,200) using the same 700 images as in Fig. 16; (b) 3D profile obtained by running the proposed algorithm through 160,000 iterations (i.e., 400×400).

Tables (2)

Table 1. Experiment parameter settings

Table 2. Results of various autofocus algorithms in identifying focal point at pixel C

Equations (13)

$$\left\{ \begin{array}{ll} i_1 = i_c - \dfrac{N_x - 1}{2}, & i_2 = i_c + \dfrac{N_x - 1}{2},\\ j_1 = j_c - \dfrac{N_y - 1}{2}, & j_2 = j_c + \dfrac{N_y - 1}{2}. \end{array} \right. \tag{1}$$

$$V_s(i,j,z) = \frac{I(i,j,z) - I(i-1,j,z)}{\mathrm{abs}\{ I(i,j,z) - I(i-1,j,z) \}} \tag{2}$$

$$\left\{ \begin{array}{ll} \text{if } V_s(i,j,z) \times V_s(i-1,j,z) = -1, & P_1(i,j,z) = 1\\ \text{if } V_s(i,j,z) \times V_s(i-1,j,z) = +1, & P_1(i,j,z) = 0 \end{array} \right. \tag{3}$$

$$N_s(z) = \sum_{i=i_1}^{i_2} \sum_{j=j_1}^{j_2} \mathrm{abs}[P_1(i,j,z)]. \tag{4}$$

$$F(u) = \sum_{z=0}^{M-1} N_s(z) \times e^{-j2\pi(uz/M)} \tag{5}$$

$$\left\{ \begin{array}{ll} K_1(u) = 1, & \text{if } \frac{M-1}{2} - C_b \le u \le \frac{M-1}{2} + C_b\\ K_1(u) = 0, & \text{otherwise} \end{array} \right. \tag{6}$$

$$\left\{ \begin{array}{ll} K_2(u) = 0, & \text{if } \frac{M-1}{2} - C_b \le u \le \frac{M-1}{2} + C_b\\ K_2(u) = 1, & \text{otherwise} \end{array} \right. \tag{7}$$

$$f_1(z) = \frac{1}{M}\, \mathrm{abs}\!\left( \sum_{u=0}^{M-1} F(u) \times K_1(u) \times e^{j2\pi(uz/M)} \right) \tag{8}$$

$$f_2(z) = \frac{1}{M}\, \mathrm{abs}\!\left( \sum_{u=0}^{M-1} F(u) \times K_2(u) \times e^{j2\pi(uz/M)} \right) \tag{9}$$

$$M_i' = z, \quad \text{if } \frac{[f_1(z) - f_1(z-1)] \times [f_1(z+1) - f_1(z)]}{\mathrm{abs}\{ [f_1(z) - f_1(z-1)] \times [f_1(z+1) - f_1(z)] \}} = -1 \tag{10}$$

$$\arg\min Q(M_i') = f_1(M_i') - \frac{1}{V_b} \sum_{z = M_i' - \frac{V_b - 1}{2}}^{M_i' + \frac{V_b - 1}{2}} \mathrm{abs}[f_2(z)] \tag{11}$$

$$I_{fringe1} = \cos\!\left( \frac{4\pi}{\lambda_1} x \times \mathit{rowscale} \right) + \cos\!\left( \frac{4\pi}{\lambda_2} x \times \mathit{rowscale} \right) + \cos\!\left( \frac{4\pi}{\lambda_3} x \times \mathit{rowscale} \right) \tag{12}$$

$$I_{fringe2} = \frac{I_{fringe1}}{\mathrm{Max}(I_{fringe1})} \times I_{scale} \tag{13}$$
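The core of the algorithm — counting slope-sign changes inside the mask, separating $N_s(z)$ into filtered curves, and locating slope-sign inflections — can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the function names, the (z, i, j) stack layout, and the synthetic test pattern are assumptions made for the example.

```python
import numpy as np

def slope_variation_count(stack, ic, jc, nx, ny):
    """N_s(z): number of slope-sign changes along the i axis inside an
    nx-by-ny mask centred on pixel (ic, jc), for every image z in the stack."""
    i1, i2 = ic - (nx - 1) // 2, ic + (nx - 1) // 2
    j1, j2 = jc - (ny - 1) // 2, jc + (ny - 1) // 2
    roi = stack[:, i1:i2 + 1, j1:j2 + 1].astype(float)
    vs = np.sign(np.diff(roi, axis=1))           # V_s: sign of I(i) - I(i-1)
    p1 = (vs[:, 1:, :] * vs[:, :-1, :]) < 0      # P_1 = 1 where the slope flips
    return p1.sum(axis=(1, 2))                   # N_s(z), one count per image

def filtered_curves(ns, cb):
    """f_1(z): band of the spectrum kept by K_1 (width set by C_b);
    f_2(z): the complementary band kept by K_2."""
    m = len(ns)
    u = np.arange(m)
    k1 = ((u >= (m - 1) / 2 - cb) & (u <= (m - 1) / 2 + cb)).astype(float)
    F = np.fft.fft(ns)
    f1 = np.abs(np.fft.ifft(F * k1))             # magnitude of filtered inverse DFT
    f2 = np.abs(np.fft.ifft(F * (1.0 - k1)))
    return f1, f2

def focus_inflection_points(f1):
    """Candidate M_i': indices z where the slope of f_1(z) changes sign."""
    d = np.diff(f1)
    return np.where(d[:-1] * d[1:] < 0)[0] + 1
```

For example, a stack of four identical 5×5 images whose rows alternate 0, 1, 0, 1, 0 yields three slope flips per column, so a full 5×5 mask gives $N_s(z) = 15$ for every z; since that sequence is constant, all its spectral energy sits at u = 0, which $K_2$ passes and $K_1$ rejects.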
