Optica Publishing Group

Marker-free stitching deflectometry for three-dimensional measurement of the specular surface

Open Access

Abstract

Due to the ‘invisible’ property of a specular surface, it is difficult for stitching deflectometry to identify the overlapping area between sub-apertures. Previously, markers were placed on a unit under test of roughly known shape to locate the overlapping area. We propose a marker-free stitching deflectometry that uses a stereo-iterative algorithm to calculate the sub-aperture point cloud without height-slope ambiguity; the overlapping area is then identified from the point cloud data. The measured area is significantly enlarged. Simulations and experiments are conducted to verify the proposal and evaluate its accuracy. For a high-quality flat with a 190 mm diameter, the measurement error is below 100 nm RMS compared with an interferometer.

© 2021 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Optical elements with specular surfaces, especially aspheric and free-form surfaces, have a variety of applications in modern optical systems, such as large astronomical telescopes [1], imaging lenses, head-up display systems for driver assistance, and laser inertial confinement fusion facilities [2]. New design, manufacturing, and testing techniques for aspheric and free-form surfaces provide a larger field of view, a larger numerical aperture, and better packaging performance. Deflectometry [3-6] is a very promising way to test specular surfaces, providing advantages such as non-contact operation, a large dynamic range, and high accuracy.

Deflectometry is a slope metrology based on the law of reflection; the measured slope is further integrated to reconstruct the surface. The slope is obtained by sensing the deflection of a ray reflected off the surface of the unit under test (UUT). However, the so-called height-slope ambiguity must be addressed, as the slope and height cannot be determined simultaneously without an extra device or extra information. Deflectometry can be categorized into active deflectometry, passive deflectometry, and mono-deflectometry. Active deflectometry [7-10] uses multiple screen locations to determine the direction of the source ray from the screen, and passive deflectometry [3,11,12] uses a two-camera configuration to realize stereo measurement. The software configurable optical test system [6,13], or mono-deflectometry, tests a specular surface with a single camera and a fixed screen; the height-slope ambiguity is solved by roughly knowing the nominal shape of the UUT. Huang et al. [14] established a polynomial model representation method that reconstructs the surface of the UUT by optimizing the polynomial coefficients. This method is sensitive to the selected initial value, that is, a pre-known nominal shape.

The aforementioned deflectometry methods can measure a specular surface of only limited size; measuring large optics is difficult. To ensure the light emitted from the screen is reflected off the UUT and captured by the camera, either a very large screen must be used or only a small part of the UUT is measured. For conventional passive deflectometry, the measured area is usually even smaller than in a mono-deflectometry configuration. A common way to enlarge the testing aperture is sub-aperture stitching, mainly applied in interferometry [15,16] and the Hartmann-Shack test [17]. Sub-aperture stitching in deflectometry is difficult because the cameras ‘see’ only the surroundings instead of the UUT itself. Therefore, markers on the UUT have been used to identify the overlapping area between sub-apertures [18]. Markers can harm the surface quality of the UUT, which is undesirable in a non-contact optical test. Moreover, markers are only applicable to near-plane surfaces or a UUT of known shape, because the sub-aperture slope is calculated with mono-deflectometry, in which the stereo information is abandoned.

In this paper, we propose a marker-free stitching deflectometry for measurement of specular surface with enlarged measured area. The point cloud within the sub-aperture is measured based on a stereo-iterative reconstruction algorithm. And the sub-apertures from different camera views are stitched together with the proposed stitching method. The major contributions of our proposal are:

  • a) Instead of monoscopic sub-aperture calculation, a stereo-iterative algorithm is proposed. This allows the point clouds of the sub-apertures to be calculated without height-slope ambiguity.
  • b) The use of markers on the UUT is eliminated, as the point cloud already carries the overlapping-area data.
  • c) A stitching algorithm is proposed to eliminate stitching errors between sub-apertures and provide a final full-aperture height map.
  • d) The measured area is significantly enlarged compared to conventional passive deflectometry.

The rest of this paper is organized as follows. In section 2, the theoretical basis of sub-aperture calculation and stitching algorithm is described in detail. In section 3, the validity and accuracy of the proposed method are evaluated with numerical simulation and experiments. The last section concludes the work.

2. Theory

The sub-aperture stitching deflectometry system consists of two (or more) cameras and a screen; the UUT is placed in front of the cameras and the screen. A two-camera system is shown in Fig. 1(a). The screen displays structured-light patterns, such as sinusoidal fringes or randomly distributed speckle. The pattern is reflected off the specular surface of the UUT and captured by the cameras. With a screen of limited size, each camera observes the displayed pattern through a sub-aperture on the UUT; the captured images are illustrated in Fig. 1(a). Conventional passive deflectometry can only measure the overlapping area between sub-apertures, while the proposed stitching deflectometry measures the combined area of the sub-apertures, as shown in Fig. 1(b).

Fig. 1. The schematic of (a) the two-camera stitching deflectometry and (b) the sub-aperture stitching.

The overall procedure of the proposed stitching deflectometry is:

Step 1: Camera and system calibration. The internal matrices of the cameras ${A_{cn}}$, the distortion parameters of the cameras ${k_{cn}}$, and the rotation and translation matrices from the screen coordinate system to the camera coordinate systems, ${R_{s2cn}}$ and ${T_{s2cn}}$, are determined; the subscript n denotes the camera index.

Step 2: Calculate the sub-aperture point cloud. For each camera view, a reference point (RP) is selected, whose three-dimensional position is determined by a stereo searching algorithm. Then, based on the RP, the sub-aperture point cloud is calculated with an iterative reconstruction. The combined method is referred to as the stereo-iterative algorithm.

Step 3: Stitch the sub-apertures. The overlapping area is defined as the common area of the sub-aperture point clouds; the stitching coefficients are calculated within the overlapping area by a fitting method. The stitching error between sub-apertures is then removed and the full-aperture height map is obtained.

In this work, camera calibration is accomplished with Zhang’s method [19], and the stereo deflectometry system is calibrated with Ren’s method [20]. In the following sub-sections 2.1 and 2.2, the sub-aperture calculation and the stitching algorithm are elaborated.

2.1 Sub-aperture calculation

The point cloud data in a sub-aperture are calculated by combining an RP stereo searching algorithm with a sub-aperture iterative reconstruction. We demonstrate the calculation of the sub-aperture of the main camera with the help of the auxiliary camera. The subscripts $ma$ and $au$ denote the main camera and the auxiliary camera, respectively.

We first introduce the RP stereo searching algorithm, which finds the three-dimensional coordinates of a chosen $R{P_{ma}}$ in the overlapping area, as shown in Fig. 2. The $R{P_{ma}}$ corresponds to a pixel ${p_{ma}}$ in the main camera. In the coordinate system of the main camera, assume the height of $R{P_{ma}}$ is h. We use the superscripts $mc$ and $ac$ to indicate that a coordinate is expressed in the main and auxiliary camera coordinate systems, respectively. The coordinate of $RP_{ma}^{mc}({x_{rpma}^{mc},y_{rpma}^{mc},z_{rpma}^{mc}} )$ is:

$$x_{rpma}^{mc} = (h - z_{cma}^{mc})\frac{{i_{xma}^{mc}}}{{i_{zma}^{mc}}} + x_{cma}^{mc},y_{rpma}^{mc} = (h - z_{cma}^{mc})\frac{{i_{yma}^{mc}}}{{i_{zma}^{mc}}} + y_{cma}^{mc},z_{rpma}^{mc} = h$$
where the unit vector of the camera probe ray $\vec{i}_{ma}^{mc}({i_{xma}^{mc},i_{yma}^{mc},i_{zma}^{mc}} )$ is determined from the internal matrix of the main camera ${A_{cma}}$ and the distortion parameters of the main camera ${k_{cma}}$. The unit vector of the surface normal $\vec{n}_{ma}^{mc}$ is:
$$\overrightarrow n _{ma}^{mc} = \frac{{{{\overrightarrow {RP_{ma}^{mc}C_{ma}^{mc}} } / {||{\overrightarrow {RP_{ma}^{mc}C_{ma}^{mc}} } ||}} + {{\overrightarrow {RP_{ma}^{mc}S_{ma}^{mc}} } / {||{\overrightarrow {RP_{ma}^{mc}S_{ma}^{mc}} } ||}}}}{{||{{{\overrightarrow {RP_{ma}^{mc}C_{ma}^{mc}} } / {||{\overrightarrow {RP_{ma}^{mc}C_{ma}^{mc}} } ||}} + {{\overrightarrow {RP_{ma}^{mc}S_{ma}^{mc}} } / {||{\overrightarrow {RP_{ma}^{mc}S_{ma}^{mc}} } ||}}} ||}}$$
where $C_{ma}^{mc} = ({x_{cma}^{mc},y_{cma}^{mc},z_{cma}^{mc}} )= ({0,0,0} )$ and $S_{ma}^{mc}({x_{sma}^{mc},y_{sma}^{mc},z_{sma}^{mc}} )$ is the screen source point with respect to ${p_{ma}}$:
$${\left[ {\begin{array}{cccc} {x_{sma}^{mc}}&{y_{sma}^{mc}}&{z_{sma}^{mc}}&1 \end{array}} \right]^T} = \left[ {\begin{array}{cc} {{R_{s2cma}}}&{{T_{s2cma}}}\\ {{0^T}}&1 \end{array}} \right]{\left[ {\begin{array}{cccc} {\frac{{{\phi_{xma}}}}{{2\pi }}T}&{\frac{{{\phi_{yma}}}}{{2\pi }}T}&0&1 \end{array}} \right]^T}$$
where T is the period of the sinusoidal fringe and ${\phi _{xma}}$ and ${\phi _{yma}}$ are the phases with respect to ${p_{ma}}$.
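As a concrete illustration, Eqs. (1)-(2) can be sketched in a few lines of numpy. The function and variable names below are ours, not from the paper, and all quantities are assumed to be expressed in the main-camera coordinate system:

```python
import numpy as np

def rp_position(h, cam_center, i_hat):
    """Eq. (1): intersect the camera probe ray with the plane z = h."""
    xc, yc, zc = cam_center
    ix, iy, iz = i_hat
    x = (h - zc) * ix / iz + xc
    y = (h - zc) * iy / iz + yc
    return np.array([x, y, h])

def surface_normal(rp, cam_center, screen_pt):
    """Eq. (2): the normal bisects the unit directions from RP
    to the camera center and to the screen source point."""
    to_cam = (cam_center - rp) / np.linalg.norm(cam_center - rp)
    to_scr = (screen_pt - rp) / np.linalg.norm(screen_pt - rp)
    bisec = to_cam + to_scr
    return bisec / np.linalg.norm(bisec)
```

With the camera at the origin and a probe ray along +z, `rp_position(h, ...)` simply returns a point at height h on the optical axis; the normal is then fixed by the chosen screen source point.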

Fig. 2. The schematic of the stereo searching algorithm for the reference point.

By now, the unit vector of the surface normal $\vec{n}_{ma}^{mc}$ has been obtained under the assumption of h. We then trace the ray emitted from ${C_{au}}$, reflected at $R{P_{ma}}$, and striking the screen at ${S_{au}}$. The unit vector of the reflected ray $\vec{r}_{au}^{mc}$ is:

$$\overrightarrow r _{au}^{mc} = \overrightarrow i _{au}^{mc} - 2(\overrightarrow i _{au}^{mc} \cdot \overrightarrow n _{ma}^{mc})\overrightarrow n _{ma}^{mc}$$
with
$$\overrightarrow i _{au}^{mc} = \frac{{\overrightarrow {C_{au}^{mc}RP_{ma}^{mc}} }}{{||{\overrightarrow {C_{au}^{mc}RP_{ma}^{mc}} } ||}}$$
where $C_{au}^{mc} = ({x_{cau}^{mc},y_{cau}^{mc},z_{cau}^{mc}} )$ is transformed from the coordinate system of the auxiliary camera to the coordinate system of the main camera with Eq. (6):
$$\left[ {\begin{array}{c} {x_{cau}^{mc}}\\ {y_{cau}^{mc}}\\ {z_{cau}^{mc}}\\ 1 \end{array}} \right] = \left[ {\begin{array}{cc} {{R_{s2cma}}}&{{T_{s2cma}}}\\ {{0^T}}&1 \end{array}} \right]{\left[ {\begin{array}{cc} {{R_{s2cau}}}&{{T_{s2cau}}}\\ {{0^T}}&1 \end{array}} \right]^{ - 1}}\left[ {\begin{array}{c} 0\\ 0\\ 0\\ 1 \end{array}} \right]$$

Then the reflected ray $\vec{r}_{au}^{mc}$ from $RP_{ma}^{mc}$ intersects with the screen:

$${\left[ {\begin{array}{ccc} {x_{sau}^{mc}}&{y_{sau}^{mc}}&{z_{sau}^{mc}} \end{array}} \right]^T} = {R_{s2cma}}{\left[ {\begin{array}{ccc} {\frac{{{\phi_{xau}}}}{{2\pi }}T}&{\frac{{{\phi_{yau}}}}{{2\pi }}T}&0 \end{array}} \right]^T} + {T_{s2cma}}$$
$$k = \frac{{x_{sau}^{mc} - x_{rpma}^{mc}}}{{r_{xau}^{mc}}} = \frac{{y_{sau}^{mc} - y_{rpma}^{mc}}}{{r_{yau}^{mc}}} = \frac{{z_{sau}^{mc} - z_{rpma}^{mc}}}{{r_{zau}^{mc}}}$$
where $({x_{sau}^{mc},y_{sau}^{mc},z_{sau}^{mc}} )$ is the coordinate of $S_{au}^{mc}$ in the main camera coordinate system, $({r_{xau}^{mc},r_{yau}^{mc},r_{zau}^{mc}} )$ is the unit vector of the reflected ray $\vec{r}_{au}^{mc}$, ${\phi _{xau}}$ and ${\phi _{yau}}$ are the phases of the intersection, and k is a scale factor. Combining Eq. (7) and Eq. (8), the phases of the intersection ${\phi _{xau}}$ and ${\phi _{yau}}$ are given by:
$${\left[ {\begin{array}{ccc} {\frac{{{\phi_{xau}}}}{{2\pi }}T}&{\frac{{{\phi_{yau}}}}{{2\pi }}T}&k \end{array}} \right]^T} = {\left[ {\begin{array}{ccc} {r_{s2cma}^1}&{r_{s2cma}^2}&{ - \overrightarrow r_{au}^{mc}} \end{array}} \right]^{ - 1}}\left( {\left[ {\begin{array}{c} {x_{rpma}^{mc}}\\ {y_{rpma}^{mc}}\\ {z_{rpma}^{mc}} \end{array}} \right] - {T_{s2cma}}} \right)$$
where $r_{s2cma}^1$ and $r_{s2cma}^2$ are the first and second columns of ${R_{s2cma}}$.
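Equations (4)-(9) amount to reflecting the auxiliary ray at the RP and solving a 3×3 linear system for the screen phases. A minimal numpy sketch (function and argument names are ours, not from the paper):

```python
import numpy as np

def trace_to_screen(rp, cam_aux, n_hat, R_s2c, T_s2c, period):
    """Reflect the auxiliary ray at RP (Eqs. 4-5) and intersect it
    with the screen plane by solving the linear system of Eq. (9)
    for the screen phases and the ray scale factor k."""
    i_hat = (rp - cam_aux) / np.linalg.norm(rp - cam_aux)  # Eq. (5)
    r_hat = i_hat - 2.0 * np.dot(i_hat, n_hat) * n_hat     # Eq. (4)
    # First two columns of R_s2c span the screen plane; -r_hat
    # accounts for travel along the reflected ray.
    A = np.column_stack([R_s2c[:, 0], R_s2c[:, 1], -r_hat])
    u, v, k = np.linalg.solve(A, rp - T_s2c)
    phi_x = 2.0 * np.pi * u / period
    phi_y = 2.0 * np.pi * v / period
    return phi_x, phi_y, k
```

For a screen plane at z = 100 mm, a horizontal mirror point, and a camera directly above, the reflected ray travels straight up and the recovered phases are zero with k equal to the mirror-to-screen distance.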

Secondly, $R{P_{ma}}$ can be transformed into the coordinate system of the auxiliary camera with Eq. (10) and projected onto the CCD of the auxiliary camera using the internal matrix of the auxiliary camera ${A_{cau}}$ and the distortion parameters of the auxiliary camera ${k_{cau}}$; the projected point on the CCD is ${p_{au}}$.

$$\left[ {\begin{array}{c} {x_{rpma}^{ac}}\\ {y_{rpma}^{ac}}\\ {z_{rpma}^{ac}}\\ 1 \end{array}} \right] = \left[ {\begin{array}{cc} {{R_{s2cau}}}&{{T_{s2cau}}}\\ {{0^T}}&1 \end{array}} \right]{\left[ {\begin{array}{cc} {{R_{s2cma}}}&{{T_{s2cma}}}\\ {{0^T}}&1 \end{array}} \right]^{ - 1}}\left[ {\begin{array}{c} {x_{rpma}^{mc}}\\ {y_{rpma}^{mc}}\\ {z_{rpma}^{mc}}\\ 1 \end{array}} \right]$$

The extracted phases with respect to ${p_{au}}$ are $\phi _x^{\prime}$ and $\phi _y^{\prime}$. The height h is searched to minimize the objective function:

$$Fun. = {({\phi _{xau}} - \phi _x^{\prime})^2} + {({\phi _{yau}} - \phi _y^{\prime})^2}$$

With the optimized h, the three-dimensional coordinates of $RP_{ma}^{mc}$ are determined by Eq. (1) in the coordinate system of the main camera.
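The search for h is a one-dimensional minimization of Eq. (11) and can be done with any scalar minimizer. Below is a golden-section sketch; since the true objective requires the full forward model, a stand-in quadratic with a known minimum is used here for illustration:

```python
import numpy as np

def golden_section_min(f, a, b, tol=1e-6):
    """Minimize a unimodal scalar function on [a, b]. In the paper's
    setting, f(h) is the squared phase discrepancy of Eq. (11) between
    the predicted and extracted auxiliary-camera phases at height h."""
    g = (np.sqrt(5.0) - 1.0) / 2.0
    c, d = b - g * (b - a), a + g * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):
            b, d = d, c           # minimum lies in [a, d]
            c = b - g * (b - a)
        else:
            a, c = c, d           # minimum lies in [c, b]
            d = a + g * (b - a)
    return 0.5 * (a + b)

# Stand-in objective with a known minimum at h = 500 (units arbitrary).
h_opt = golden_section_min(lambda h: (h - 500.0) ** 2, 400.0, 600.0)
```

The bracket [a, b] would come from a rough estimate of the UUT position in front of the setup; the quadratic shape of Eq. (11) near the solution makes the search well conditioned.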

Based on the calculated $RP_{ma}^{mc}({x_{rpma}^{mc},y_{rpma}^{mc},z_{rpma}^{mc}} )$, the sub-aperture point cloud $M_{ma}^{mc}(x_{mma}^{mc},y_{mma}^{mc}, z_{mma}^{mc} )$ can be iteratively reconstructed [21], as shown in Fig. 3(a). We start with the initial assumption that the UUT is a plane perpendicular to the z-axis of the main camera coordinate system, $z_{mma}^{mc} = z_{rpma}^{mc}$. The initial $M_{ma}^{mc}$ is then calculated with:

$$x_{mma}^{mc} = (z_{mma}^{mc} - z_{cma}^{mc})\frac{{i_{xma}^{mc}}}{{i_{zma}^{mc}}} + x_{cma}^{mc},y_{mma}^{mc} = (z_{mma}^{mc} - z_{cma}^{mc})\frac{{i_{yma}^{mc}}}{{i_{zma}^{mc}}} + y_{cma}^{mc},z_{mma}^{mc} = z_{rpma}^{mc}$$

Fig. 3. (a) The schematic of iterative reconstruction for sub-aperture point cloud, (b) the flowchart of the iterative reconstruction.

The slope data at ${M_{ma}}$ is determined with Eq. (13):

$${S_x} ={-} \frac{{\frac{{(x_{mma}^{mc} - x_{sma}^{mc})}}{{{d_{m2s}}}} + \frac{{(x_{mma}^{mc} - x_{cma}^{mc})}}{{{d_{m2c}}}}}}{{\frac{{(z_{mma}^{mc} - z_{sma}^{mc})}}{{{d_{m2s}}}} + \frac{{(z_{mma}^{mc} - z_{cma}^{mc})}}{{{d_{m2c}}}}}},{S_y} ={-} \frac{{\frac{{(y_{mma}^{mc} - y_{sma}^{mc})}}{{{d_{m2s}}}} + \frac{{(y_{mma}^{mc} - y_{cma}^{mc})}}{{{d_{m2c}}}}}}{{\frac{{(z_{mma}^{mc} - z_{sma}^{mc})}}{{{d_{m2s}}}} + \frac{{(z_{mma}^{mc} - z_{cma}^{mc})}}{{{d_{m2c}}}}}}$$
with
$${d_{m2s}} = ||{\overrightarrow {{M_{ma}}{S_{ma}}} } ||,{d_{m2c}} = ||{\overrightarrow {{M_{ma}}{C_{ma}}} } ||$$
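Equations (13)-(14) can be checked directly in numpy. In the symmetric case below (screen source and camera mirrored about the surface point), the surface must be locally horizontal, so both slopes vanish. Names are ours:

```python
import numpy as np

def slopes(m, s, c):
    """Eqs. (13)-(14): surface slopes at a point-cloud point m, given
    the screen source point s and the camera center c, all in the
    main-camera coordinate system."""
    d_m2s = np.linalg.norm(s - m)   # Eq. (14)
    d_m2c = np.linalg.norm(c - m)
    num_x = (m[0] - s[0]) / d_m2s + (m[0] - c[0]) / d_m2c
    num_y = (m[1] - s[1]) / d_m2s + (m[1] - c[1]) / d_m2c
    den = (m[2] - s[2]) / d_m2s + (m[2] - c[2]) / d_m2c
    return -num_x / den, -num_y / den
```

The numerator and denominator are the x (or y) and z components of the un-normalized surface normal, i.e. the bisector of the point-to-screen and point-to-camera directions, so the ratio is the local surface gradient.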

The slope data are integrated with a polynomial modal method to reconstruct the height data; the reconstructed height w can be expressed as:

$$\left\{ \begin{array}{l} w = C_{poly}^{2 - k} \cdot Pol{y^{2 - k}} + C_{poly}^1\\ w({x_{rpma}^{mc},y_{rpma}^{mc}} )= z_{rpma}^{mc} \end{array} \right.$$
$Pol{y^{2 - k}}$ and $C_{poly}^{2 - k}$ are the second to $k$th polynomial terms and their coefficients, respectively. $C_{poly}^{2 - k}$ is determined by the modal method, and the first-term coefficient $C_{poly}^1$ is obtained with Eq. (16):
$$C_{poly}^1 = z_{rpma}^{mc} - C_{poly}^{2 - k} \cdot Pol{y^{2 - k}}({x_{rpma}^{mc},y_{rpma}^{mc}} )$$
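The modal integration of Eqs. (15)-(16) can be sketched with a small monomial basis standing in for the Zernike terms used in the paper: fit the basis gradients to the slopes by least squares, then pin the piston term so the reconstruction passes through the RP. All names here are illustrative:

```python
import numpy as np

def modal_integrate(x, y, sx, sy, x_rp, y_rp, z_rp):
    """Modal integration sketch (Eqs. 15-16) with the monomial basis
    {x, y, x^2, xy, y^2}: solve for the higher-order coefficients from
    the slopes, then set the piston so that w(RP) = z_rp."""
    # Gradients of the basis terms with respect to x and y.
    Gx = np.column_stack([np.ones_like(x), np.zeros_like(x),
                          2 * x, y, np.zeros_like(x)])
    Gy = np.column_stack([np.zeros_like(y), np.ones_like(y),
                          np.zeros_like(y), x, 2 * y])
    A = np.vstack([Gx, Gy])
    b = np.concatenate([sx, sy])
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)      # C_poly^{2-k}
    basis = lambda xx, yy: np.stack([xx, yy, xx**2, xx*yy, yy**2])
    c1 = z_rp - coef @ basis(x_rp, y_rp)              # Eq. (16)
    return c1 + coef @ basis(x, y)                    # Eq. (15)
```

For a parabolic test surface w = 0.01x² + 5 sampled on a grid, the slopes are (0.02x, 0); fitting them and pinning the piston at an RP with z = 5 recovers the surface exactly, illustrating how the RP removes the integration constant.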

Once the height w is reconstructed, it is regarded as a new UUT assumption, based on which the point cloud $M_{ma}^{mc}$ is corrected with Eq. (17):

$$x_{mma}^{mc} = (w - z_{cma}^{mc})\frac{{i_{xma}^{mc}}}{{i_{zma}^{mc}}} + x_{cma}^{mc},y_{mma}^{mc} = (w - z_{cma}^{mc})\frac{{i_{yma}^{mc}}}{{i_{zma}^{mc}}} + y_{cma}^{mc},z_{mma}^{mc} = w$$

The corrected point cloud is substituted back into Eq. (13) to calculate new slope data; the flowchart is shown in Fig. 3(b). The point cloud $M_{ma}^{mc}$ is output when:

$$|{{w^{j + 1}} - {w^j}} |< \varepsilon$$
where $\mathrm{\varepsilon }$ is the convergence threshold and j is the iteration index.
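The loop of Fig. 3(b) is a fixed-point iteration alternating ray re-intersection, slope evaluation, and modal integration. This skeleton takes the three per-step operations as callables and is only a structural sketch, not the paper's implementation:

```python
import numpy as np

def iterate_subaperture(w0, update_cloud, eval_slopes, integrate,
                        eps=1e-9, max_iter=50):
    """Fixed-point iteration of Fig. 3(b): re-intersect the probe rays
    with the current height estimate (Eq. 17), evaluate slopes (Eq. 13),
    integrate to a new height (Eqs. 15-16), and stop when the height
    change falls below eps (Eq. 18)."""
    w = np.asarray(w0, dtype=float)
    for _ in range(max_iter):
        cloud = update_cloud(w)       # Eq. (17)
        sx, sy = eval_slopes(cloud)   # Eq. (13)
        w_new = integrate(sx, sy)     # Eqs. (15)-(16)
        if np.max(np.abs(w_new - w)) < eps:   # Eq. (18)
            return w_new
        w = w_new
    return w
```

For near-plane or shallow surfaces the mapping is a contraction, so the height estimate converges geometrically; the toy callables in the test below mimic this with the contraction w ← 0.5w + 1, whose fixed point is 2.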

2.2 Stitching algorithm

After the sub-aperture calculation, the point clouds $M_{sub1}^s({x_{sub1}^s,y_{sub1}^s,z_{sub1}^s} )$ and $M_{sub2}^s(x_{sub2}^s,y_{sub2}^s, z_{sub2}^s )$ are obtained and expressed in the screen coordinate system; the superscript s indicates the screen coordinate system and the subscripts $sub1$ and $sub2$ indicate the sub-apertures of camera 1 and camera 2, respectively. The overlapping area is nearly rectangular. We find the maxima and minima of $x_{sub1}^s$ and $x_{sub2}^s$ and define the x boundary vector $bx = [{\max ({x_{sub1}^s} ),\; \min ({x_{sub1}^s} ),\; \max ({x_{sub2}^s} ),\; \min ({x_{sub2}^s} )} ]$. The maximum and minimum of $bx$ are then removed, and the remaining two intermediate elements of $bx$ are the x boundaries of the overlapping area. The y boundaries of the overlapping area are determined in the same way. Within the overlapping area, the point clouds $M_{sub1}^s$ and $M_{sub2}^s$ are resampled to $({{x_{re}},{y_{re}},{z_{resub1}}} )$ and $({{x_{re}},{y_{re}},{z_{resub2}}} )$, respectively. The difference between ${z_{resub1}}$ and ${z_{resub2}}$ is regarded as a stitching error to be removed:

$$[{{z_{resub1}} - {z_{resub2}}} ]= \left[ {\begin{array}{ccc} 1&{{x_{re}}}&{{y_{re}}} \end{array}} \right] \times {\left[ {\begin{array}{ccc} {c1}&{c2}&{c3} \end{array}} \right]^T}$$
where ${\left[ {\begin{array}{ccc} {c1}&{c2}&{c3} \end{array}} \right]^T}$ is the error coefficient vector. The stitching error is then removed from the point cloud, giving $M_{sub2}^{s\ast }({x_{sub2}^s,y_{sub2}^s,z_{sub2}^{s\ast }} )$, which is stitched to $M_{sub1}^s$:
$$z_{sub2}^{s \ast } = \left[ {\begin{array}{ccc} 1&{x_{sub2}^s}&{y_{sub2}^s} \end{array}} \right] \times {\left[ {\begin{array}{ccc} {c1}&{c2}&{c3} \end{array}} \right]^T} + z_{sub2}^s$$

The combined point cloud $[{M_{sub1}^s,M_{sub2}^{s\ast }} ]$ is the final full-aperture result.
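The stitching step can be sketched numerically as follows (hypothetical names; one axis of the overlap detection is shown, and the piston/tilt fit of Eqs. (19)-(20) is done with a least-squares solve):

```python
import numpy as np

def overlap_bounds(a, b):
    """One axis of the overlap boundary: of the four extrema of the
    two sub-aperture coordinate sets, drop the global maximum and
    minimum; the two intermediate values bound the overlapping area."""
    bx = np.sort([a.max(), a.min(), b.max(), b.min()])
    return bx[1], bx[2]

def stitch(z_re1, z_re2, x_re, y_re, x_sub2, y_sub2, z_sub2):
    """Eqs. (19)-(20): fit a piston/tilt plane to the height difference
    over the resampled overlap, then add the fitted plane to the whole
    second sub-aperture so it matches the first."""
    A = np.column_stack([np.ones_like(x_re), x_re, y_re])
    c, *_ = np.linalg.lstsq(A, z_re1 - z_re2, rcond=None)  # Eq. (19)
    return z_sub2 + c[0] + c[1] * x_sub2 + c[2] * y_sub2   # Eq. (20)
```

Because only piston and tilt are fitted, the correction cannot introduce higher-order shape into sub-aperture 2, which is consistent with the error analysis in section 3.2.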

3. Verification

3.1 Simulation

The simulation establishes a stitching deflectometry system by setting the system parameters. The given internal matrices of the cameras ${A_{cn}}$, distortion parameters of the cameras ${k_{cn}}$, and rotation and translation matrices from the screen coordinate system to the camera coordinate systems, ${R_{s2cn}}$ and ${T_{s2cn}}$, are listed in Table 1. The camera resolution is $966 \times 1296$ pixels, and the screen resolution is $1200 \times 1600$ pixels with a pixel pitch of 0.27051 mm.


Table 1. System parameters for the simulation.

The UUT is modeled by a set of polynomials with a diameter of 160 mm. The coordinates of the screen source points and the corresponding phases are determined by tracing rays emitted from the cameras that intersect the UUT, reflect off it, and strike the screen. The sub-apertures are calculated with the proposed stereo-iterative algorithm; Zernike polynomials are used to reconstruct the height from the slope data with the modal method [22,23]. The sub-aperture point cloud data are shown in Fig. 4. The sub-apertures are stitched to reconstruct the absolute height of the UUT, and the reconstruction result is shown in Fig. 5. The reconstructed absolute height of the UUT is shown in Fig. 5(a); the height difference from the ground truth is around 75.5 nm, as shown in Fig. 5(b). The piston and tilt terms are removed from the height to demonstrate the reconstructed shape of the UUT. The shape and shape error are shown in Fig. 5(c) and 5(d), respectively. The departure of the calculated shape from the ground truth is below 1 nm.

Fig. 4. Sub-aperture point cloud of the two cameras (simulation).

Fig. 5. Simulation results: (a) the reconstructed absolute height of the UUT, (b) the height error, (c) reconstructed shape of the UUT (piston and tilt removed), (d) the shape error.

3.2 Testing a high-quality flat and a wafer with free-form deformation

An experimental setup is established to verify the marker-free stitching deflectometry, as demonstrated in Fig. 6. The test system consists of an LCD screen with $1200 \times 1600$ pixels, two cameras with $966 \times 1296$ pixels, and the UUTs. A high-quality flat with an out-of-plane deviation below $1/10$ wavelength is measured to demonstrate the accuracy. A wafer with unknown free-form deformation is also measured. The system parameters ${A_{cn}}$, ${k_{cn}}$, ${R_{s2cn}}$ and ${T_{s2cn}}$ are carefully calibrated before the measurement. The 8-step phase-shifting algorithm [24] and the 3-heterodyne temporal unwrapping algorithm [25] are used to extract the phase distribution as well as the coordinates of the screen source points. Zernike polynomials are used as the polynomial set for the modal integration method [22,23].

Fig. 6. Experimental test system and UUT.

For the flat measurement, the captured fringe images from the two cameras are shown in Fig. 7. The selected reference points are clearly within the overlapping area. The sub-aperture calculation is conducted separately for each camera to obtain the point cloud data, shown in Fig. 8. The stitching algorithm then provides the full aperture, as illustrated in Fig. 9(a). Piston and tilt terms are removed from the height to show the out-of-plane deviation measured by the proposed method, as shown in Fig. 9(b), with 86.3 nm RMS and 517.0 nm PV. For comparison, the result measured by a Fizeau interferometer is demonstrated in Fig. 9(c), with 6.0 nm RMS and 40.1 nm PV. The proposed stitching deflectometry thus achieves an error below 100 nm RMS within an aperture of 190 mm.

Fig. 7. Captured images of the cameras and the selected reference points (UUT: flat).

Fig. 8. Sub-aperture point cloud of the two cameras (UUT: flat).

Fig. 9. Results of stitching deflectometry (UUT: flat): (a) full aperture height, (b) the result with piston and tilt removed, (c) interferometer result.

Though careful calibration is conducted, there is a nearly 100 nm RMS difference. The comparison of Zernike coefficients between the stitching deflectometry and the interferometer is given in Fig. 10(a). This error could arise from the sub-aperture calculation (systematic error) and/or the stitching algorithm. The systematic error is introduced by the non-ideal setup, such as a non-ideal camera model [26] and an imperfect screen [7,27,28] with out-of-plane shape and refraction of the cover glass, etc. We remove the tilt and piston from Fig. 8 to demonstrate the shapes of the sub-apertures before stitching, as shown in Fig. 10(b) and 10(c). Figures 10(b) and 10(c) share identical features with Fig. 9(b), as well as the RMS and PV magnitudes, indicating that the stitching hardly induces significant extra errors. On the other hand, the stitching model includes only piston and tilt, which in theory hardly introduces high-order errors into the stitched aperture $M_{sub2}^{s\ast }$ according to Eq. (20). The stitching residual with respect to Eq. (19) can be calculated with:

$$residual = [{{z_{resub1}} - {z_{resub2}}} ]- \left[ {\begin{array}{ccc} 1&{{x_{re}}}&{{y_{re}}} \end{array}} \right] \times {\left[ {\begin{array}{ccc} {c1}&{c2}&{c3} \end{array}} \right]^T}$$

Fig. 10. (a) Comparison of the first 30 Zernike coefficients between the stitching deflectometry and the interferometer, piston and tilt removed, (b) camera 1 sub-aperture shape with piston and tilt removed, (c) camera 2 sub-aperture shape with piston and tilt removed, (d) residual of stitching in the overlapping area.

We demonstrate the residual map in Fig. 10(d); the residual is of a similar order of magnitude to the systematic error. The residual map shows the difference, excluding piston and tilt, in the overlapping area; artifacts such as the roughly 0.7 µm rise in the bottom-right corner are not fitted and not stitched into Fig. 9(b). Therefore, the measurement accuracy is mainly affected by the systematic error introduced by the non-ideal setup rather than by the stitching algorithm. To further improve the measurement accuracy, either the systematic error should be calibrated or the imperfections of the setup should be addressed.

For the wafer measurement, the captured fringe images from the two cameras are shown in Fig. 11. With the selected reference points, the stereo-iterative algorithm calculates the sub-aperture point clouds, as shown in Fig. 12. The stitched full-aperture point cloud is demonstrated in Fig. 13(a). After removing piston and tilt, the shape of the wafer shows a free-form deformation in Fig. 13(b), with 27.8 µm RMS and 203.6 µm PV. For comparison, we test the UUT with an advanced active deflectometry [9]. By placing the screen in multiple (at least two) positions, nearly the full aperture of the UUT is measured. The point cloud obtained by intersecting the incident and reflected rays is illustrated in Fig. 13(c). The height reconstructed by the proposed stitching deflectometry is slightly smoother than that of the active deflectometry. The point cloud obtained by intersection is used to calculate slope data, and the slope is integrated to reconstruct the shape. The result of the active deflectometry with piston and tilt removed is shown in Fig. 13(d), with 29.1 µm RMS and 200.7 µm PV. There are some regions that the active deflectometry fails to measure. Because the proposed method reconstructs a smoother height, the RMS value of the stitching deflectometry is smaller than that of the active deflectometry. The comparison of Zernike coefficients between the two methods is given in Fig. 14; the coefficient distributions show good agreement.

Fig. 11. Captured images of the cameras and the selected reference points (UUT: wafer).

Fig. 12. Sub-aperture point cloud of the two cameras (UUT: wafer).

Fig. 13. Results (UUT: wafer): (a) full aperture height of stitching deflectometry, (b) the result of stitching deflectometry with piston and tilt removed, (c) height of active deflectometry, (d) the result of active deflectometry with piston and tilt removed.

Fig. 14. Comparison of the first 30 Zernike coefficients between the proposed stitching deflectometry and the active deflectometry, piston and tilt removed (UUT: wafer).

4. Conclusion

In this paper, we propose a stitching deflectometry that utilizes an ambiguity-free stereo-iterative algorithm and eliminates the use of markers on the UUT. The stereo-iterative algorithm combines a stereo searching algorithm for the RP with an iterative reconstruction; it is more efficient than the conventional stereo algorithm because point-by-point stereo searching is avoided. The calculated sub-aperture point clouds identify the overlapping area, which is used to stitch the sub-apertures together. The stitched aperture is significantly larger than the measured area of conventional deflectometry, especially passive deflectometry. We verify the proposed method with simulations and experimental measurements. The simulation shows that the proposed method is theoretically correct; the simulated measurement error is below 1 nm for a free-form surface with an aperture of 160 mm. A high-quality flat with 6 nm RMS deviation (in a 190 mm aperture) is measured, and the measurement error of the proposed stitching deflectometry is below 100 nm RMS. The measurement of a wafer with free-form deviation is also demonstrated, and the result of the stitching deflectometry is compared with that of the active deflectometry; the Zernike coefficients of the two methods show good agreement. With this method, deflectometry can measure large optics by extending the two-camera stitching to multi-camera stitching.

Funding

National Natural Science Foundation of China (61875142, U20A20215); Sichuan University (2020SCUNG205).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. I. Trumper, P. Hallibert, J. W. Arenberg, H. Kunieda, O. Guyon, H. P. Stahl, and D. W. Kim, “Optics technology for large-aperture space telescopes: from fabrication to final acceptance tests,” Adv. Opt. Photonics 10(3), 644–702 (2018). [CrossRef]  

2. R. H. Sawicki, “The National Ignition Facility: laser system, beam line design, and construction,” Proc. SPIE 5341, 43–54 (2004). [CrossRef]  

3. M. C. Knauer, J. Kaminski, and G. Hausler, “Phase measuring deflectometry: a new approach to measure specular free-form surfaces,” Proc. SPIE 5457, 366–376 (2004). [CrossRef]  

4. R. Ritter and R. Hahn, “Contribution to analysis of the reflection grating method,” Optics and Lasers in Engineering 4(1), 13–24 (1983). [CrossRef]  

5. T. Bothe, W. Li, C. von Kopylow, and W. P. Juptner, “High-resolution 3D shape measurement on specular surfaces by fringe reflection,” Proc. SPIE 5457, 411–422 (2004). [CrossRef]  

6. P. Su, R. E. Parks, L. Wang, R. P. Angel, and J. H. Burge, “Software configurable optical test system: a computerized reverse Hartmann test,” Appl. Opt. 49(23), 4404–4412 (2010). [CrossRef]  

7. M. Petz and R. Tutsch, “Reflection grating photogrammetry: a technique for absolute shape measurement of specular free-form surfaces,” Proc. SPIE 5869, 58691D (2005).

8. Y. Xiao, S. Li, Q. Zhang, J. Zhong, X. Su, and Z. You, “Optical fringe-reflection deflectometry with bundle adjustment,” Optics and Lasers in Engineering 105, 132–140 (2018).

9. X. Zhang, D. Li, and R. Wang, “Active speckle deflectometry based on 3D digital image correlation,” Opt. Express 29(18), 28427–28440 (2021).

10. Y. Liu, S. Huang, Z. Zhang, N. Gao, F. Gao, and X. Jiang, “Full-field 3D shape measurement of discontinuous specular objects by direct phase measuring deflectometry,” Sci. Rep. 7(1), 1–8 (2017).

11. J. Xu, N. Xi, C. Zhang, Q. Shi, and J. Gregory, “A geometry and optical property inspection system for automotive glass based on fringe patterns,” Opt. Appl. 40(4), 827–841 (2010).

12. H. Zhang, I. Šics, J. Ladrera, M. Llonch, J. Nicolas, and J. Campos, “Displacement-free stereoscopic phase measuring deflectometry based on phase difference minimization,” Opt. Express 28(21), 31658–31674 (2020).

13. P. Su, Y. Wang, J. H. Burge, K. Kaznatcheev, and M. Idir, “Non-null full field X-ray mirror metrology using SCOTS: a reflection deflectometry approach,” Opt. Express 20(11), 12393–12406 (2012).

14. L. Huang, J. Xue, B. Gao, C. McPherson, J. Beverage, and M. Idir, “Modal phase measuring deflectometry,” Opt. Express 24(21), 24649–24664 (2016).

15. L. Huang, J. Xue, B. Gao, and M. Idir, “One-dimensional angular-measurement-based stitching interferometry,” Opt. Express 26(8), 9882–9892 (2018).

16. X. Wang, L. Wang, L. Yin, B. Zhang, D. Fan, and X. Zhang, “Measurement of large aspheric surfaces by annular subaperture stitching interferometry,” Chin. Opt. Lett. 5(11), 645–647 (2007).

17. H. Xu, H. Xian, and Y. Zhang, “Comparison of two stitching algorithms for annular subaperture Hartmann-Shack testing method,” Proc. SPIE 7849, 78491Q (2010).

18. P. Chen, D. Li, Q. Wang, L. Li, K. Xu, J. Zhao, and R. Wang, “A method of sub-aperture slope stitching for testing flat element based on phase measuring deflectometry,” Optics and Lasers in Engineering 110, 392–400 (2018).

19. Z. Zhang, “A flexible new technique for camera calibration,” IEEE Transactions on Pattern Analysis and Machine Intelligence 22(11), 1330–1334 (2000).

20. H. Ren, F. Gao, and X. Jiang, “Iterative optimization calibration method for stereo deflectometry,” Opt. Express 23(17), 22060–22068 (2015).

21. R. Wang, D. Li, and X. Zhang, “Systematic error control for deflectometry with iterative reconstruction,” Measurement 168, 108393 (2021).

22. C. Zhao and J. H. Burge, “Orthonormal vector polynomials in a unit circle, Part I: basis set derived from gradients of Zernike polynomials,” Opt. Express 15(26), 18014–18024 (2007).

23. C. Zhao and J. H. Burge, “Orthonormal vector polynomials in a unit circle, Part II: completing the basis set,” Opt. Express 16(9), 6586–6591 (2008).

24. C. Zuo, S. Feng, L. Huang, T. Tao, W. Yin, and Q. Chen, “Phase shifting algorithms for fringe projection profilometry: A review,” Optics and Lasers in Engineering 109, 23–59 (2018).

25. C. Zuo, L. Huang, M. Zhang, Q. Chen, and A. Asundi, “Temporal phase unwrapping algorithms for fringe projection profilometry: A comparative review,” Optics and Lasers in Engineering 85, 84–103 (2016).

26. T. Bothe, W. Li, M. Schulte, C. Kopylow, R. B. Bergmann, and W. P. O. Jüptner, “Vision ray calibration for the quantitative geometric description of general imaging and projection optics in metrology,” Appl. Opt. 49(30), 5851–5860 (2010).

27. M. Petz, M. Fischer, and R. Tutsch, “Systematic errors in deflectometry induced by use of liquid crystal displays as reference structure,” 21st IMEKO TC2 Symposium on Photonics in Measurement (2013).

28. J. Bartsch, J. R. Nüß, M. H. U. Prinzler, M. Kalms, and R. B. Bergmann, “Effects of non-ideal display properties in phase measuring deflectometry: A model-based investigation,” Proc. SPIE 10678, 106780Y (2018).

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.



Figures (14)

Fig. 1. Schematic of (a) the two-camera stitching deflectometry, (b) the sub-aperture stitching.
Fig. 2. Schematic of the stereo searching algorithm for the reference point.
Fig. 3. (a) Schematic of the iterative reconstruction for the sub-aperture point cloud, (b) flowchart of the iterative reconstruction.
Fig. 4. Sub-aperture point clouds of the two cameras (simulation).
Fig. 5. Simulation results: (a) reconstructed absolute height of the UUT, (b) height error, (c) reconstructed shape of the UUT (piston and tilt removed), (d) shape error.
Fig. 6. Experimental test system and UUT.
Fig. 7. Captured images of the cameras and the selected reference points (UUT: flat).
Fig. 8. Sub-aperture point clouds of the two cameras (UUT: flat).
Fig. 9. Results of stitching deflectometry (UUT: flat): (a) full-aperture height, (b) result with piston and tilt removed, (c) interferometer result.
Fig. 10. (a) Comparison of the first 30 Zernike coefficients between the stitching deflectometry and the interferometer (piston and tilt removed), (b) camera 1 sub-aperture shape with piston and tilt removed, (c) camera 2 sub-aperture shape with piston and tilt removed, (d) residual of stitching in the overlapping area.
Fig. 11. Captured images of the cameras and the selected reference points (UUT: wafer).
Fig. 12. Sub-aperture point clouds of the two cameras (UUT: wafer).
Fig. 13. Results (UUT: wafer): (a) full-aperture height from stitching deflectometry, (b) stitching deflectometry result with piston and tilt removed, (c) height from active deflectometry, (d) active deflectometry result with piston and tilt removed.
Fig. 14. Comparison of the first 30 Zernike coefficients between the proposed stitching deflectometry and active deflectometry, piston and tilt removed (UUT: wafer).

Tables (1)

Table 1. System parameters for the simulation.

Equations (21)


$$x_{rp}^{m_am_c} = \left(h - z_c^{m_am_c}\right)\frac{i_x^{m_am_c}}{i_z^{m_am_c}} + x_c^{m_am_c},\quad y_{rp}^{m_am_c} = \left(h - z_c^{m_am_c}\right)\frac{i_y^{m_am_c}}{i_z^{m_am_c}} + y_c^{m_am_c},\quad z_{rp}^{m_am_c} = h$$

$$\mathbf{n}^{m_am_c} = \frac{\overrightarrow{RP^{m_am_c}C^{m_am_c}}\big/\left\|\overrightarrow{RP^{m_am_c}C^{m_am_c}}\right\| + \overrightarrow{RP^{m_am_c}S^{m_am_c}}\big/\left\|\overrightarrow{RP^{m_am_c}S^{m_am_c}}\right\|}{\left\|\overrightarrow{RP^{m_am_c}C^{m_am_c}}\big/\left\|\overrightarrow{RP^{m_am_c}C^{m_am_c}}\right\| + \overrightarrow{RP^{m_am_c}S^{m_am_c}}\big/\left\|\overrightarrow{RP^{m_am_c}S^{m_am_c}}\right\|\right\|}$$
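The normal equation above is the normalized bisector of the unit vectors from the reference point toward the camera and toward the screen point. A minimal numerical sketch (assuming all three points are 3-vectors in the same main-camera frame; the function name is illustrative, not from the paper):

```python
import numpy as np

def surface_normal(rp, cam, scr):
    """Normal at reference point rp as the normalized bisector of the unit
    vectors toward the camera position cam and the screen point scr,
    as required by the law of reflection."""
    to_cam = (cam - rp) / np.linalg.norm(cam - rp)
    to_scr = (scr - rp) / np.linalg.norm(scr - rp)
    bisector = to_cam + to_scr
    return bisector / np.linalg.norm(bisector)
```

For a symmetric camera/screen arrangement about the z axis the bisector points straight up, which is a quick sanity check of the sign conventions.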
$$\begin{bmatrix} x_s^{m_am_c} & y_s^{m_am_c} & z_s^{m_am_c} & 1 \end{bmatrix}^T = \begin{bmatrix} R_{s2c}^{m_a} & T_{s2c}^{m_a} \\ \mathbf{0}^T & 1 \end{bmatrix} \begin{bmatrix} \dfrac{\phi_x^{m_a}}{2\pi}T & \dfrac{\phi_y^{m_a}}{2\pi}T & 0 & 1 \end{bmatrix}^T$$

$$\mathbf{r}^{a_um_c} = \mathbf{i}^{a_um_c} - 2\left(\mathbf{i}^{a_um_c}\cdot\mathbf{n}^{a_um_c}\right)\mathbf{n}^{a_um_c}$$

$$\mathbf{i}^{a_um_c} = \frac{C^{a_um_c} - RP^{m_am_c}}{\left\|C^{a_um_c} - RP^{m_am_c}\right\|}$$
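The reflected ray above follows the standard vector form of the law of reflection, r = i − 2(i·n)n. A self-contained sketch (names illustrative; both inputs are normalized defensively, which the closed-form expression assumes):

```python
import numpy as np

def reflect(i, n):
    """Reflect the incident ray direction i about the unit surface
    normal n: r = i - 2 (i . n) n."""
    i = i / np.linalg.norm(i)
    n = n / np.linalg.norm(n)
    return i - 2.0 * np.dot(i, n) * n
```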
$$\begin{bmatrix} x_c^{a_um_c} \\ y_c^{a_um_c} \\ z_c^{a_um_c} \\ 1 \end{bmatrix} = \begin{bmatrix} R_{s2c}^{m_a} & T_{s2c}^{m_a} \\ \mathbf{0}^T & 1 \end{bmatrix} \begin{bmatrix} R_{s2c}^{a_u} & T_{s2c}^{a_u} \\ \mathbf{0}^T & 1 \end{bmatrix}^{-1} \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix}$$

$$\begin{bmatrix} x_s^{a_um_c} & y_s^{a_um_c} & z_s^{a_um_c} \end{bmatrix}^T = R_{s2c}^{m_a} \begin{bmatrix} \dfrac{\phi_x^{a_u}}{2\pi}T & \dfrac{\phi_y^{a_u}}{2\pi}T & 0 \end{bmatrix}^T + T_{s2c}^{m_a}$$

$$k = \frac{x_s^{a_um_c} - x_{rp}^{m_am_c}}{r_x^{a_um_c}} = \frac{y_s^{a_um_c} - y_{rp}^{m_am_c}}{r_y^{a_um_c}} = \frac{z_s^{a_um_c} - z_{rp}^{m_am_c}}{r_z^{a_um_c}}$$

$$\begin{bmatrix} \dfrac{\phi_x^{a_u}}{2\pi}T & \dfrac{\phi_y^{a_u}}{2\pi}T & k \end{bmatrix}^T = \begin{bmatrix} \mathbf{r}_{s2c}^{m_a,1} & \mathbf{r}_{s2c}^{m_a,2} & -\mathbf{r}^{a_um_c} \end{bmatrix}^{-1} \left( \begin{bmatrix} x_{rp}^{m_am_c} & y_{rp}^{m_am_c} & z_{rp}^{m_am_c} \end{bmatrix}^T - T_{s2c}^{m_a} \right)$$

$$\begin{bmatrix} x_{rp}^{m_aa_c} \\ y_{rp}^{m_aa_c} \\ z_{rp}^{m_aa_c} \\ 1 \end{bmatrix} = \begin{bmatrix} R_{s2c}^{a_u} & T_{s2c}^{a_u} \\ \mathbf{0}^T & 1 \end{bmatrix} \begin{bmatrix} R_{s2c}^{m_a} & T_{s2c}^{m_a} \\ \mathbf{0}^T & 1 \end{bmatrix}^{-1} \begin{bmatrix} x_{rp}^{m_am_c} \\ y_{rp}^{m_am_c} \\ z_{rp}^{m_am_c} \\ 1 \end{bmatrix}$$

$$Fun. = \left(\phi_x^{a_u} - \phi_x\right)^2 + \left(\phi_y^{a_u} - \phi_y\right)^2$$
$$x_m^{m_am_c} = \left(z_m^{m_am_c} - z_c^{m_am_c}\right)\frac{i_x^{m_am_c}}{i_z^{m_am_c}} + x_c^{m_am_c},\quad y_m^{m_am_c} = \left(z_m^{m_am_c} - z_c^{m_am_c}\right)\frac{i_y^{m_am_c}}{i_z^{m_am_c}} + y_c^{m_am_c},\quad z_m^{m_am_c} = z_{rp}^{m_am_c}$$

$$S_x = \frac{\dfrac{x_m^{m_am_c} - x_s^{m_am_c}}{d_{m2s}} + \dfrac{x_m^{m_am_c} - x_c^{m_am_c}}{d_{m2c}}}{\dfrac{z_m^{m_am_c} - z_s^{m_am_c}}{d_{m2s}} + \dfrac{z_m^{m_am_c} - z_c^{m_am_c}}{d_{m2c}}},\quad S_y = \frac{\dfrac{y_m^{m_am_c} - y_s^{m_am_c}}{d_{m2s}} + \dfrac{y_m^{m_am_c} - y_c^{m_am_c}}{d_{m2c}}}{\dfrac{z_m^{m_am_c} - z_s^{m_am_c}}{d_{m2s}} + \dfrac{z_m^{m_am_c} - z_c^{m_am_c}}{d_{m2c}}}$$

$$d_{m2s} = \left\|\overrightarrow{M^{m_a}S^{m_a}}\right\|,\quad d_{m2c} = \left\|\overrightarrow{M^{m_a}C^{m_a}}\right\|$$
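The SCOTS-style slope formula above can be sketched directly for one measurement point. This is an illustrative implementation, not the authors' code; m, s, and c are the measurement point, screen point, and camera position as 3-vectors in the main-camera frame:

```python
import numpy as np

def scots_slopes(m, s, c):
    """Surface slopes (Sx, Sy) at measurement point m, given screen point s
    and camera position c, with each coordinate difference weighted by the
    inverse of the point-to-screen and point-to-camera distances."""
    d_m2s = np.linalg.norm(m - s)
    d_m2c = np.linalg.norm(m - c)
    denom = (m[2] - s[2]) / d_m2s + (m[2] - c[2]) / d_m2c
    sx = ((m[0] - s[0]) / d_m2s + (m[0] - c[0]) / d_m2c) / denom
    sy = ((m[1] - s[1]) / d_m2s + (m[1] - c[1]) / d_m2c) / denom
    return sx, sy
```

With a camera and screen point placed symmetrically about a point on a level surface, both slopes evaluate to zero, as expected for a flat.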
$$\begin{cases} w = C_{poly}^{2\sim k}\,Poly^{2\sim k} + C_{poly}^{1} \\ w\!\left(x_{rp}^{m_am_c}, y_{rp}^{m_am_c}\right) = z_{rp}^{m_am_c} \end{cases}$$

$$C_{poly}^{1} = z_{rp}^{m_am_c} - C_{poly}^{2\sim k}\,Poly^{2\sim k}\!\left(x_{rp}^{m_am_c}, y_{rp}^{m_am_c}\right)$$

$$x_m^{m_am_c} = \left(w - z_c^{m_am_c}\right)\frac{i_x^{m_am_c}}{i_z^{m_am_c}} + x_c^{m_am_c},\quad y_m^{m_am_c} = \left(w - z_c^{m_am_c}\right)\frac{i_y^{m_am_c}}{i_z^{m_am_c}} + y_c^{m_am_c},\quad z_m^{m_am_c} = w$$

$$\left| w^{j+1} - w^{j} \right| < \varepsilon$$
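The iteration above alternates between intersecting the camera ray with the current surface estimate and re-evaluating the surface height, stopping when successive heights agree to within ε. The core of that loop can be sketched for a single ray as follows (a simplified illustration under the assumption that the surface is given as a callable z = w(x, y); names are not from the paper):

```python
import numpy as np

def intersect_ray_with_surface(cam, ray, w, w0=0.0, eps=1e-9, max_iter=100):
    """Fixed-point intersection of a camera ray with a surface z = w(x, y):
    project the ray to the current height estimate, evaluate the surface
    there, and repeat until |w_{j+1} - w_j| < eps."""
    z = w0
    for _ in range(max_iter):
        # Project the ray from camera position cam to height z.
        x = (z - cam[2]) * ray[0] / ray[2] + cam[0]
        y = (z - cam[2]) * ray[1] / ray[2] + cam[1]
        z_new = w(x, y)  # surface height at the projected footprint
        if abs(z_new - z) < eps:
            return x, y, z_new
        z = z_new
    return x, y, z
```

For gently sloped surfaces the update is a strong contraction, so a handful of iterations is typically enough.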
$$\left[ z_{re}^{sub1} - z_{re}^{sub2} \right] = \begin{bmatrix} 1 & x_{re} & y_{re} \end{bmatrix} \times \begin{bmatrix} c_1 & c_2 & c_3 \end{bmatrix}^T$$

$$z_{sub2}^{s} = \begin{bmatrix} 1 & x_{sub2} & y_{sub2} \end{bmatrix} \times \begin{bmatrix} c_1 & c_2 & c_3 \end{bmatrix}^T + z_{sub2}$$

$$residual = \left[ z_{re}^{sub1} - z_{re}^{sub2} \right] - \begin{bmatrix} 1 & x_{re} & y_{re} \end{bmatrix} \times \begin{bmatrix} c_1 & c_2 & c_3 \end{bmatrix}^T$$
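The stitching step above fits a piston/tilt plane, dz ≈ c1 + c2·x + c3·y, to the height difference of the two sub-apertures in the overlapping area, then applies it to sub-aperture 2. A minimal least-squares sketch (illustrative names; x, y, dz are arrays over the overlap points):

```python
import numpy as np

def stitch_correction(x, y, dz):
    """Least-squares fit of the overlap height difference dz = z_sub1 - z_sub2
    at points (x, y) to a plane c1 + c2*x + c3*y, returning (c1, c2, c3).
    Adding this plane to sub-aperture 2 aligns it with sub-aperture 1."""
    A = np.column_stack([np.ones_like(x), x, y])
    coeffs, *_ = np.linalg.lstsq(A, dz, rcond=None)
    return coeffs
```

The remaining residual, dz minus the fitted plane, quantifies the stitching quality in the overlap.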