Optica Publishing Group

High-precision automatic centering method for aspheric optical elements based on machine vision

Open Access

Abstract

In machine vision inspection of large-aperture aspheric optical components, the limited field of view and the micron-level detection accuracy requirement make sub-aperture scanning imaging indispensable. High-precision scanning depends on alignment between the spin axis of the mechanical system and the optical axis of the component, so the component must be centered before scanning. To address this problem, this paper proposes a high-precision automatic centering method (HPACM) for rotationally symmetric aspheric optical elements based on machine vision. The goal is to adjust two reference points on the optical axis onto the mechanical spin axis: the sphere center of the upper surface vertex, and the sphere center of the lower surface vertex as imaged by the upper surface of the element. Adjusting the first point onto the spin axis is the eccentric error correction (EEC), and adjusting the second point onto the spin axis is the tilt error correction (TEC). In both EEC and TEC, the component rotates one full circle around the spin axis. If an eccentric or tilt error exists, the crosshair on the image plane moves along a circular trajectory centered on the spin axis. The pixel coordinates of crosshair center extraction algorithm (PCCEA) extracts the pixel coordinates of the crosshair centers during this circular motion, and a least squares circle fitting algorithm yields the trajectory circle of the crosshair centers. The crosshair center and the trajectory circle center on the image plane correspond, respectively, to the sphere center of the surface vertex and a point on the spin axis on the object side, and the relative position of these two points on the image plane can be converted to the object side using the system parameters. Experimental results show that the proposed HPACM can correct the eccentric and tilt errors to within 7 µm and 0.5′.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Aspheric optical elements offer a higher degree of design freedom and can correct spherical aberration. Using aspheric elements in optical systems can effectively improve image quality, correct partial aberrations, and enlarge the field of view [1–3]. Automated machine vision inspection of surface defects on large-aperture aspheric optical components has long been a challenge in the optical inspection industry. The effective field of view of a machine vision system is very limited, while defect detection accuracy generally needs to reach the micrometer level; therefore, detection based on sub-aperture image scanning is indispensable. High-precision sub-aperture image scanning requires the optical axis of the element to be aligned with the mechanical spin axis, so the element must be centered before scanning. At present, centering of spherical components is relatively mature; commonly used methods include laser centering, three-coordinate measurement, transmission collimation imaging, transmission interferometry, reflection interferometry, and reflection collimation imaging [4–9]. Among them, the reflective collimation imaging method requires only a simple structure yet has a large working range, and is widely used for centering spherical elements. However, there is no mature solution for automatic, high-precision centering of aspheric optical components, which generally relies on a reflective centering instrument and a multi-dimensional adjustment table operated manually. This demands highly skilled operators, is inefficient, and yields calibration results that cannot be quantified, are strongly affected by subjective factors, and cannot be applied to automated production lines. This limited detection capability restricts the wide application of aspheric optical elements.
In response to this problem, a high-precision automatic centering method (HPACM) is proposed in this paper.

The core idea of HPACM is to adjust two reference points on the optical axis onto the mechanical spin axis, in two steps: EEC and TEC. During EEC, the crosshair rays emitted by the centering instrument are converged at the sphere center of the upper surface vertex of the element. The element is spun around the spin axis; if the sphere center deviates from the spin axis, the reflected crosshair image on the image plane moves along a circle centered on the spin axis. In EEC, the center of the crosshair image corresponds to the sphere center of the upper surface vertex, and the center of the fitted circle corresponds to a point on the mechanical spin axis. The pixel distances between the crosshair image center and the fitted circle center in the X and Y directions can then be obtained. Combined with the system parameters, these pixel distances are converted to the object-side eccentric error, which is corrected by the translational motors along the X and Y directions. After EEC, the optical axis and the mechanical spin axis intersect at a point but remain inclined to each other; the purpose of TEC is to eliminate this angle. In TEC, the crosshair rays converge at the sphere center of the lower surface vertex as imaged by the upper surface, whose position must be calculated from the paraxial imaging ray tracing formulas.

2. Theoretical introduction

2.1 Equations and categories of aspheric optical elements

The aspherical surface formula is as follows

$$z = \frac{{c({x^2} + {y^2})}}{{1 + \sqrt {1 - (1 + k){c^2}({x^2} + {y^2})} }} + \sum\limits_{i = 2}^N {{A_{2i}}{{({x^2} + {y^2})}^i}} , $$
where c is the curvature at the vertex, equal to $1/r$; r is the radius of the vertex sphere; and k is the conic constant. As shown in Fig. 1, according to the value of k, the aspheric surface can be classified as a hyperboloid, paraboloid, ellipsoid, or oblate ellipsoid. The first term in the equation is the quadratic term, and the second is a higher-order term.

Fig. 1. Categories of aspheric optical components.

2.2 Eccentric and tilt errors

To ensure high-precision reconstruction of the sub-aperture images of a large-aperture aspheric optical element, the element must be centered before sub-aperture scanning; centering adjusts the optical axis of the element into alignment with the spin axis of the inspection system. Figures 2(a) and 2(b) show the cases in which the optical axis and the mechanical spin axis of the aspheric element have only an eccentric error and only a tilt error, respectively. The eccentric error and tilt error are denoted by $\varDelta $ and $\triangle \varphi $ respectively, where $\triangle \varphi $ is an angular quantity.

Fig. 2. (a) is eccentric error and (b) is tilt error between the optical axis and spin axis.

2.3 Optical principle

The optical principles of EEC and TEC are the same. Figure 3 illustrates the optical principle of EEC; the optical path of TEC is similar, the difference being that EEC images the sphere center of the upper surface vertex of the aspherical element, while TEC images the sphere center of the lower surface vertex as imaged by the upper surface.

Fig. 3. Optical path diagram of EEC.

In Fig. 3, the aspherical surface drawn with the black solid line is in the ideal state: the optical axis of the element coincides with the mechanical spin axis, and there is no eccentric or tilt error. The crosshair pattern emitted by the centering instrument is collimated by the lens group, enters the objective lens horizontally, converges at the sphere center ${O_C}$ of the upper surface vertex of the aspherical element, and is then reflected back to the image plane. Because the optical axis of the element is aligned with the mechanical spin axis, the crosshair image formed on the image plane remains stationary while the element spins, as shown by the green crosshair on the image plane; the center of the crosshair corresponds to the point ${O_C}$. When there is a deviation between the optical axis of the element and the mechanical spin axis, as shown by the aspheric element drawn with the red dashed line, the sphere center of the upper surface vertex is ${C_0}$. The green incident light returns to the image plane along the red dashed line, the extension of the reflected rays converges at the point ${B_0}$, and the crosshair image formed on the image plane is $im{g_{{B_0}}}$. When the aspherical element rotates one circle around the mechanical spin axis, the crosshair image moves along a circular trajectory centered on the center of the green crosshair on the image plane. There is a linear relationship between the object-side eccentric error $\varDelta $ and the pixel radius ${r_e}$ of the trajectory circle on the image plane, given by the following formula

$$\varDelta = {r_e}\varepsilon /(2\Gamma ), $$
in which $\varepsilon $ is the pixel size of the imaging device and $\Gamma $ is the magnification of the system. For TEC, the position of the sphere center of the lower surface vertex as imaged by the upper surface must be located; section 2.4 introduces this process.
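The conversion above is a one-line calculation. The sketch below applies Eq. (2) using, as defaults, the pixel size and magnification reported later in section 4 (5.5 µm, $\Gamma = 2$); the function name and the example radius are illustrative, not from the paper.

```python
def eccentric_error_um(r_e_px: float, pixel_um: float = 5.5, mag: float = 2.0) -> float:
    """Object-side eccentric error, Eq. (2): delta = r_e * eps / (2 * Gamma).

    The factor of 2 accounts for reflection doubling the lateral shift of
    the returned crosshair image.
    """
    return r_e_px * pixel_um / (2.0 * mag)

# A trajectory-circle radius of 8 px maps to 8 * 5.5 / 4 = 11 um on the object side.
print(eccentric_error_um(8.0))  # 11.0
```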

2.4 Paraxial imaging of aspherical system

The aspherical system can be approximated as a coaxial spherical system in the paraxial area, so the optical path tracing can follow the optical path tracing formulas of the paraxial spherical system. The following will introduce the derivation process in conjunction with Fig. 4.

Fig. 4. Relevant parameters of single refractive aspheric paraxial imaging.

A paraxial optical system is one in which the object aperture angle u is very small; since $\sin (u) \approx u$ as $u \to 0$, the radian value can be used in place of the sine. The basic formulas of the paraxial optical path are as follows

$$I = (l - r)u/r, $$
$${I^{\prime}} = nI/{n^{\prime}}, $$
$${u^{\prime}} = I + u - {I^{\prime}}, $$
$${l^{\prime}} = r(1 + {I^{\prime}}/{u^{\prime}}). $$
where r is the radius of the aspheric surface vertex, l is the object intercept, $l^{\prime}$ is the image intercept, u is the object aperture angle, $u^{\prime}$ is the image aperture angle, I is the incidence angle, $I^{\prime}$ is the refraction angle, n is the object-side refractive index, and $n^{\prime}$ is the image-side refractive index. From Eqs. (3)–(6), the object aperture angle ${u_k}$ of the k-th aspheric surface can be derived
$${u_k} = {h_k}/{l_k}. $$
where ${h_k}$ is the radius of the beam incident on the k-th aspheric surface, and ${l_k}$ is the object intercept of the k-th aspheric surface, from which the image side aperture angle ${u_k}^\prime $ of the k-th aspheric surface can be derived
$${u_k}^\prime = ({h_k}/{r_k}({n_k} - {n_{k + 1}}) + {n_{k + 1}}{u_k})/{n_k}$$

From Eq. (8), it can be deduced that the image intercept ${l_k}^\prime $ of the k-th surface is

$${l_k}^\prime = {h_k}/{u_k}^\prime . $$

Simplify Eq. (9) to get

$${l_k}^\prime = \frac{{{r_{k - 1}}({r_k} - {d_{k - 1}})}}{{{n_{k - 1}}({n_{k - 1}} - {n_k})({r_k} - {d_{k - 1}}) + {n_{k - 1}}{n_k}{r_{k - 1}}}}. $$

For a single aspheric element there are only two surfaces; the distance ${l_2}^\prime $ between the sphere center of the lower surface vertex after being imaged by the upper surface (hereinafter referred to as the sphere center image of the lower surface vertex) and the lower surface is

$${l_2}^\prime = \frac{{{r_1}({r_2} - {d_1})}}{{{n_1}({n_1} - {n_2})({r_2} - {d_1}) + {n_1}{n_2}{r_1}}}, $$
in which ${r_1},{r_2}$ are the radii of the upper and lower surfaces of the element, ${d_1}$ is the thickness, and ${n_1} = 1$ and ${n_2}$ are the refractive indices of air and of the element material.
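Eq. (11) can be evaluated directly once the element parameters are known. The sketch below does so for an illustrative element; the radii, thickness, and refractive index here are made-up values, not the parameters of the paper's samples.

```python
def lower_vertex_center_image(r1: float, r2: float, d1: float,
                              n2: float, n1: float = 1.0) -> float:
    """Eq. (11): intercept l2' from the lower surface to the sphere center
    of the lower surface vertex as imaged by the upper surface.

    r1, r2: vertex radii of upper and lower surfaces (signed, mm)
    d1:     element thickness (mm); n1, n2: indices of air and glass.
    """
    num = r1 * (r2 - d1)
    den = n1 * (n1 - n2) * (r2 - d1) + n1 * n2 * r1
    return num / den

# Illustrative (assumed) element: r1 = 100 mm, r2 = -100 mm, d1 = 20 mm, n2 = 1.5.
l2_img = lower_vertex_center_image(100.0, -100.0, 20.0, 1.5)
print(l2_img)  # intercept in mm; negative sign follows the sign convention of r2
```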

3. HPACM introduction

The automatic optical centering system consists of a centering instrument, an adjustment table, and a computer. The optical path analysis in section 2.3 shows that the aspheric optical element can image the crosshair both at the sphere center of the upper surface vertex and at the sphere center image of the lower surface vertex; the crosshair images at these two positions serve as references for quantifying the eccentric and tilt errors. In Fig. 5, from bottom to top, the adjustment table has a mechanical spin axis $\alpha $, axes $x,y$ that translate in the X and Y directions, and swing axes ${\varphi _x},{\varphi _y}$ that swing in the X and Y directions. The eccentric error is corrected by adjusting $x,y$, and the tilt error by adjusting ${\varphi _x},{\varphi _y}$. In conjunction with Fig. 6, the following introduces the proposed HPACM.

Fig. 5. Schematic diagram of automatic centering system.

Fig. 6. Flow chart of HPACM.

First, the system is initialized; the crosshair is then roughly focused on the vertex of the upper surface of the element, and a crosshair image is acquired. EEC and TEC only require the center coordinates of the crosshair image, and the acquired crosshair image is very large, which hinders real-time image processing. Therefore, this paper proposes an automatic ROI acquisition algorithm (ARAA) to automatically extract the ROI around the crosshair center, introduced in section 3.1. To automate the centering process, this paper also proposes an autofocus algorithm (AFA), introduced in section 3.2. After ARAA and AFA are executed, the crosshair light is focused on the sphere center of the upper surface vertex of the component and EEC is executed; the crosshair light is then focused on the sphere center image of the lower surface vertex and TEC is executed. EEC and TEC are introduced in section 3.3. Because of systematic and random errors, a single correction may not be enough; we therefore set the iteration termination condition that the eccentric error is no larger than 10 µm and the tilt error is no larger than 1′, and experiments show that these constraints are generally met within at most 3 iterations. The following sections introduce, in order, the algorithms marked in bold red in Fig. 6.

3.1 Automatic ROI acquisition algorithm

Because the acquired crosshair images have high resolution, each set of experiments needs to process 12 images, and a complete centering process generally requires 9 to 13 sets of experiments, so 108 to 156 images must be processed, which is very time-consuming. Since eccentric or tilt correction only requires the coordinates of the crosshair center, we designed and implemented ARAA to reduce the time complexity of image processing. The purpose of ARAA is to extract the area containing the crosshair center from the whole crosshair image and record the pixel coordinates of the upper left corner of that ROI. The central area of the crosshair image is the sharpest and can be located with a definition (sharpness) evaluation function. The definition evaluation function used in this paper is the normalized SMD2 (still called SMD2 for convenience), chosen for its excellent sensitivity. The expression of SMD2 is as follows

$$SMD2 = \sum\limits_{x,y = 1,1}^{i,j} {\frac{{|f(x,y) - f(x - 1,y)|\cdot |f(x,y) - f(x,y - 1)|}}{{255ij}}}. $$

in which i and j are the pixel width and height of the image, and $f(x,y)$ is the gray value at pixel coordinate $(x,y)$. Based on the SMD2 definition evaluation function, ARAA searches the entire image for the sharpest ROI containing the crosshair center, then records the top left corner of the ROI so that the position of the crosshair center on the entire image can be recovered. Subsequent processing is performed only on the ROI, which greatly reduces the amount of calculation. ARAA is introduced with reference to Fig. 7 as follows. First, set the width W and height H of the sub-area and the step S, which defines the search step to the right or downward on the input image, and compute the SMD2 values of all sub-areas in parallel. Then store the SMD2 values, in order, in a mapping table. Finally, sort the mapping table by key value in descending order; the sub-region at the head of the table is the ROI with the highest definition score on the image, that is, the region containing the crosshair center, and the coordinate of its upper left corner is returned. In Fig. 7, the targeted ROI is ${A_{ij}}$, and the upper left corner of ${A_{ij}}$ is $({x_{ij}},{y_{ij}})$.
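A minimal sketch of Eq. (12) and the sliding-window search is given below. The paper scores sub-areas in parallel and ranks them in a sorted mapping table; for clarity this sketch keeps only a sequential running maximum, which returns the same top-ranked window. Function names and the window/step defaults (200/50 px, per section 4.1) are ours.

```python
import numpy as np

def smd2(img: np.ndarray) -> float:
    """Normalized SMD2 sharpness, Eq. (12): the product of horizontal and
    vertical gray-level differences, summed and scaled by 255*i*j."""
    f = img.astype(np.float64)
    dx = np.abs(f[1:, 1:] - f[:-1, 1:])   # |f(x,y) - f(x-1,y)|
    dy = np.abs(f[1:, 1:] - f[1:, :-1])   # |f(x,y) - f(x,y-1)|
    h, w = f.shape
    return float((dx * dy).sum() / (255.0 * h * w))

def find_sharpest_roi(img: np.ndarray, W: int = 200, H: int = 200, S: int = 50):
    """ARAA sketch: slide an H x W window with step S over the image and
    return the top-left corner (x, y) of the window with the highest SMD2."""
    best, best_xy = -1.0, (0, 0)
    for y in range(0, img.shape[0] - H + 1, S):
        for x in range(0, img.shape[1] - W + 1, S):
            score = smd2(img[y:y + H, x:x + W])
            if score > best:
                best, best_xy = score, (x, y)
    return best_xy
```

Because every window has the same size, the $255ij$ normalization does not change the ranking; it only keeps scores comparable across images.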

Fig. 7. Demonstration of ARAA.

3.2 Autofocus algorithm

The key of AFA is choosing an appropriate definition evaluation function, selected from nine commonly used ones: Tenengrad, Laplacian, PVA Grad, SMD, SMD2, Energy, Vollath, Entropy, and FFT [10]. Taking 21 crosshair images acquired in one experiment as inputs, the nine definition evaluation functions are computed and then normalized; their curves are shown in Fig. 8. The larger the value, the sharper the ROI, and the position with the highest value can be regarded as the best focus position. In Fig. 8, except for the vertices of the Vollath, Entropy, and FFT curves, the vertices of the other six curves share the same abscissa, which coincides with the coarse focus position. Therefore, the coarse focus position is the best focus position in this experiment. Then, by computing the differential gradient values of the six curves at the best focus position, the definition evaluation function with the highest sensitivity can be selected. The formula for the differential gradient value at n is as follows

$$\delta = |2f(n) - f(n - 1) - f(n + 1)|/2. $$
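Eq. (13) is the magnitude of the discrete second difference of a focus curve at index n; a sharper peak gives a larger value. A minimal sketch (the function name and the sample curve are illustrative):

```python
def diff_gradient(values, n: int) -> float:
    """Eq. (13): differential gradient |2 f(n) - f(n-1) - f(n+1)| / 2,
    i.e. half the discrete second difference of a focus curve at index n.
    A large value at the best focus position indicates high sensitivity."""
    return abs(2 * values[n] - values[n - 1] - values[n + 1]) / 2

# Normalized focus scores of one hypothetical evaluation function,
# peaking at index 2 (the best focus position).
curve = [0.2, 0.5, 1.0, 0.6, 0.3]
print(diff_gradient(curve, 2))  # sensitivity of this function at the peak
```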

Fig. 8. Curves of 9 types of definition evaluation functions.

From the differential gradient values at the best focus position shown in Fig. 8, SMD2 has the best sensitivity at the best focus position. Therefore, we choose the SMD2 definition evaluation function as the evaluation index.

3.3 Eccentric error and tilt error correction

The flow charts of EEC and TEC are shown in Fig. 9. The crosshair beams are focused, respectively, on the sphere center of the upper surface vertex and on the sphere center image of the lower surface vertex of the aspheric element, and the reflected crosshair image is formed on the image plane. Figure 10(a) shows the movement of the sphere center of the upper surface vertex when the element rotates around the spin axis $\alpha $ with both eccentric and tilt errors present. After EEC, only the tilt error remains; spinning the element one more circle produces the movement of the sphere center image of the lower surface vertex shown in Fig. 10(b). During EEC, the crosshair light is first focused on the sphere center of the upper surface vertex, and the reflected light forms a crosshair image on the CMOS image plane; the center of the crosshair corresponds to the object-side sphere center. The core of HPACM is to use PCCEA to accurately extract the pixel coordinates of the crosshair center, which is introduced in section 3.4. As the $\alpha $ axis spins 360° in 30° steps, the crosshair image on the CMOS moves one circle around the spin axis. PCCEA extracts the center pixel coordinates of the 12 crosshair images collected during the spin, and, given at least 3 valid pixel coordinates, the least squares circle fitting algorithm fits the trajectory circle of the crosshair on the image plane; the pixel radius corresponds to the image-side eccentric error. The 12 extracted pixel coordinates correspond to the object-side sphere centers ${C_0}\sim {C_{11}}$ at 12 positions during the spin.
The spin center ${O_C}$ of ${C_0}\sim {C_{11}}$ is a point on the spin axis. As shown in Fig. 10(c), the distance from ${C_0}$ to ${O_C}$ is the eccentric error. According to Eq. (2), the pixel radius ${r_e}$ can be converted to the object-side distance from ${C_0}$ to ${O_C}$, decomposed into the distances $\triangle {x_e},\triangle {y_e}$ along the X and Y directions, and the $x,y$ axes are then driven to compensate the eccentric error.
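The paper does not spell out its least squares circle fit. A common linear formulation (the Kåsa fit), which would serve here, solves $ax + by + c = -(x^2 + y^2)$ and recovers the center and radius from the coefficients; the sketch below is our assumed implementation, not necessarily the authors' exact one.

```python
import numpy as np

def fit_circle(points):
    """Least-squares (Kasa) circle fit through >= 3 points.

    Writes the circle as x^2 + y^2 + a x + b y + c = 0, solves the linear
    system for (a, b, c), then center = (-a/2, -b/2) and
    radius = sqrt(cx^2 + cy^2 - c).
    """
    pts = np.asarray(points, dtype=np.float64)
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = -(x**2 + y**2)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    cx, cy = -a / 2.0, -b / 2.0
    r = np.sqrt(cx**2 + cy**2 - c)
    return cx, cy, r
```

Applied to the 12 extracted crosshair centers, the fitted center locates the spin axis on the image plane and the fitted radius gives the pixel radius $r_e$ used in Eq. (2).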

Fig. 9. Flow chart of EEC and TEC.

Fig. 10. (a) is the trajectory of the sphere center ${C_i}$ of the upper surface vertex when the aspherical element spins one circle around the spin axis, under the condition that the eccentric and tilt errors exist at the same time, and (b) is the trajectory of the sphere center image ${C_i}^\prime $ of the lower surface vertex after the EEC operation. (c) and (d) are the projections of the sphere center of the upper surface vertex and the sphere center image of the lower surface vertex on the object-side X-Y plane during the spinning process.

After EEC, the pose of the aspheric element changes from Fig. 10(a) to Fig. 10(b), and the tilt error must then be corrected. According to Eq. (11), the intercept from the sphere center image of the lower surface vertex to the lower surface of the component is calculated, and the system focuses on this position. As in EEC, the $\alpha $ axis is rotated one circle in 30° steps, 12 ROI images of the crosshair center are collected, PCCEA extracts their center coordinates, and the least squares circle fitting algorithm gives the pixel position of the spin center ${O_C}^\prime $.

As shown in Fig. 10(d), the sphere center images of the lower surface vertex on the object side are denoted ${C_0}^\prime \sim {C_{11}}^\prime$, and the spin center is ${O_C}^\prime $. The pixel distances from ${C_0}^\prime$ to ${O_C}^\prime $ on the image plane are calculated, converted into the object-side distances $\triangle {x_t},\triangle {y_t}$, and then converted into angles. According to Fig. 10(b), the swing radii ${r_{{\varphi _x}}},{r_{{\varphi _y}}}$ of ${\varphi _x},{\varphi _y}$ can be measured in CAD software, as can the distances ${h_{{\varphi _x}}},{h_{{\varphi _y}}}$ between the swing surfaces of the two swing shafts ${\varphi _x},{\varphi _y}$ and the lower surface of the component. The swing centers of ${\varphi _x},{\varphi _y}$ are ${O_{{\varphi _x}}},{O_{{\varphi _y}}}$ respectively. Because the swing angle is very small, the ratio of the swing chord length to the swing arm length can be used as an approximation of the swing angle, where the swing chord lengths are $\triangle {x_t},\triangle {y_t}$ and the swing arm lengths of ${\varphi _x},{\varphi _y}$ are ${O_{{\varphi _x}}}{C_{11}}^\prime $ and ${O_{{\varphi _y}}}{C_{11}}^\prime $ respectively. Therefore, the angles $\triangle {\varphi _x},\triangle {\varphi _y}$ that need to be compensated are

$$\triangle {\varphi _x} = \triangle {x_t}/({h_{{\varphi _x}}} - {r_{{\varphi _x}}} + \textrm{|}{l_2}^\prime \textrm{|}), $$
$$\triangle {\varphi _y} = \triangle {y_t}/({h_{{\varphi _y}}} - {r_{{\varphi _y}}} + \textrm{|}{l_2}^\prime \textrm{|}). $$
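Eqs. (14)–(15) can be sketched directly. The stage geometry below ($h_{\varphi_x}$ = 165.5 mm, $r_{\varphi_x}$ = 125 mm, $h_{\varphi_y}$ = 204.2 mm, $r_{\varphi_y}$ = 163 mm) is taken from section 4; the displacements and $|l_2^\prime| = 30$ mm are illustrative values we assume for the example.

```python
import math

def tilt_correction_angles(dx_t: float, dy_t: float,
                           h_x: float, r_x: float,
                           h_y: float, r_y: float,
                           l2_img: float):
    """Eqs. (14)-(15): small-angle tilt corrections (radians).

    dx_t, dy_t : object-side displacements of the lower-surface vertex
                 center image (mm); l2_img : the intercept l2' (mm).
    h_*, r_*   : swing-surface heights and swing radii of the phi axes (mm).
    """
    dphi_x = dx_t / (h_x - r_x + abs(l2_img))
    dphi_y = dy_t / (h_y - r_y + abs(l2_img))
    return dphi_x, dphi_y

# Assumed displacements of 0.02 / 0.03 mm with the section 4 stage geometry.
dpx, dpy = tilt_correction_angles(0.02, 0.03, 165.5, 125.0, 204.2, 163.0, 30.0)
print(math.degrees(dpx) * 60)  # phi_x correction expressed in arcminutes
```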

3.4 Pixel coordinates of crosshair center extraction algorithm

As shown in Fig. 11, adaptive threshold binarization is performed on the ROI to obtain a binary image; its mathematical principle is as follows. First, the size of a single binarization region module is set to $ksize$, which in our method is 17, and the Gaussian weight $T(x,y)$ is calculated for each pixel in the module.

$$T(x,y) = \sigma \cdot \exp [ - {(i(x,y) - (ksize - 1)/2)^2}/{(2 \cdot \theta )^2}]$$
where $i(x,y) = \sqrt {{x^2} + {y^2}} $, $(x,y)$ is the pixel coordinate with the center of a single $ksize \times ksize$ area module as the origin, $\theta = 0.3 \cdot [(ksize - 1) \cdot 0.5 - 1] + 0.8$, and $\sigma $ is a normalization factor satisfying $\sum {T(x,y) = 1}$. The binarization rule is as follows
$$dst(x,y) = \left\{ {\begin{array}{c} {255,src(x,y) > T(x,y)}\\ {0,src(x,y) \le T(x,y)} \end{array}} \right.$$
where $dst(x,y)$ is the target binary image and $src(x,y)$ is the original ROI image. Second, because background noise produces stray connected domains on the binarized image, a morphological erosion with a small structuring element is applied to remove them; the result is shown in Fig. 12(a). Because the crosshair image has a bright center and dim surroundings, the largest circle inscribed in the connected domain lies in the central area of the crosshair. Hence, the problem becomes searching the connected domain for the largest inscribed circle and obtaining the coordinates of its center, which is taken as the crosshair center. The extraction result is shown in Fig. 12(b), and the position of the crosshair center in the global image is obtained by adding the upper-left-corner coordinate $({x_{ij}},{y_{ij}})$ of the ROI obtained by ARAA.
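The largest-inscribed-circle step amounts to finding the foreground pixel farthest from any background pixel. The paper does not specify the implementation; the sketch below uses a brute-force distance computation, which is adequate for a small ROI (in practice a distance transform, e.g. OpenCV's, would be faster).

```python
import numpy as np

def crosshair_center(binary: np.ndarray):
    """Center of the largest circle inscribed in the foreground of a
    binary image: the foreground pixel whose distance to the nearest
    background pixel is maximal. Returns (x, y, radius).

    Brute-force over all fg/bg pixel pairs; fine for a small ROI.
    """
    fg = np.argwhere(binary > 0)    # (row, col) of foreground pixels
    bg = np.argwhere(binary == 0)   # (row, col) of background pixels
    # distance from each foreground pixel to its nearest background pixel
    d = np.sqrt(((fg[:, None, :] - bg[None, :, :]) ** 2).sum(axis=2)).min(axis=1)
    y, x = fg[d.argmax()]
    return int(x), int(y), float(d.max())
```

For a plus-shaped crosshair mask, the maximum of this distance field falls at the crossing of the two arms, which is exactly the crosshair center sought by PCCEA.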

Fig. 11. Flow chart of PCCEA.

Fig. 12. Images in the process of PCCEA, (a) is the image after the morphological erosion operation and (b) is the extracted center point of the crosshair.

4. HPACM experiments

The centering system is shown in Fig. 13; all axes were calibrated, and the results are listed in Table 1.

Fig. 13. Centering system.

Table 1. The stroke range and the accuracy after correction of the six-dimensional adjustment table.

In the experiments, an objective lens with a focal length of 200 mm is selected, the corresponding system magnification $\Gamma $ is 2, and the pixel size $\varepsilon $ of the CMOS is 5.5 µm. We select two hyperboloid elements, #1 and #2, and a parabolic element, #3, as experimental samples; their parameters are listed in Table 2. In the experiments, we verified the efficiency and robustness of ARAA by applying it to the collected crosshair images for center extraction, verified PCCEA by applying it to extract the center coordinates of the extracted crosshair-center ROIs, and verified HPACM by carrying out six complete runs of EEC and TEC. Table 3 lists the hardware and software configuration of the experiments. The ${h_{{\varphi _x}}},{h_{{\varphi _y}}},{r_{{\varphi _x}}},{r_{{\varphi _y}}}$ of the centering system are 165.5 mm, 204.2 mm, 125 mm, and 163 mm respectively.

Table 2. Parameters of experimental samples

Table 3. Software and hardware configuration

4.1 ARAA experiments

To reduce the time complexity of PCCEA, ARAA is used to automatically search for the ROI containing the crosshair center and record the pixel coordinate of its upper left corner. In the experiments, both $H$ and $W$ are set to 200 pixels, and the search step S is set to 50 pixels. So far we have conducted a large number of centering experiments, in which ARAA has been executed more than 1000 times with stable results. Figure 14 shows the results of using ARAA to extract the crosshair center during the EEC or TEC process; the leftmost image of each row is an original image. The image of #1 is clear and bright. The image of #2 contains two crosshair images (the blurred one is reflected from the lower surface), and ARAA filters out the interference image well. Due to the surface normal aberration [11] of #3, its crosshair image is blurred, and the GFA proposed in [11] must be used for image enhancement before ARAA. These results show that ARAA is robust to different degrees of blurring and to background interference.

Fig. 14. (a)(b)(c): the first image of each row is a crosshair image of #1, #2, and #3 respectively in the three groups of experiments; the other four are the crosshair-center ROIs extracted by ARAA in each group of experiments.

4.2 PCCEA experiments

In each set of EEC or TEC experiments, 12 crosshair images are collected. From the pixel coordinates of the crosshair centers, a circle centered on the mechanical spin axis is fitted with the least squares algorithm, so the extraction accuracy of the center coordinates directly determines whether the spin center can be located accurately. Figure 15 shows the execution results of PCCEA. Even for #3, whose imaging is very blurred, the proposed PCCEA can still accurately find the center of the crosshair image; adding the center coordinates to the upper-left-corner coordinates of the ROI recorded in Section 4.1 yields the pixel coordinates of the crosshair center on the entire image.

Fig. 15. (a)(b)(c) are the PCCEA results of #1, #2, and #3 respectively. The center of the circle corresponds to the center of the crosshair extracted by our algorithm.

4.3 Complete experimental demonstration of EEC or TEC

The operation processes of eccentric and tilt error correction are similar. It is difficult to completely correct the errors in one pass, so the iteration termination condition is set: the eccentric error is no larger than 10 µm, and the tilt error is no larger than 1′. Figure 16 shows how the crosshair center coordinates change during a complete EEC (TEC). In the first calibration, the 12 crosshair centers are shown as red dots. After the first correction, 12 crosshair centers are obtained again, shown as green dots; due to the systematic error of the mechanical shafting and the random error of component machining, the fitted circle is distorted. After the second correction, the eccentric or tilt deviation is very small, as shown by the innermost blue circle. Generally, after two rounds of correction, the eccentric or tilt error meets the constraint conditions.

Fig. 16. The change process of the crosshair center on the image plane during the eccentric or tilt correction process.

4.4 Repeatability experiments of HPACM

In order to verify the robustness of the proposed HPACM, we conducted two complete experiments on each of #1, #2, and #3. Figure 17(a) shows the results after EEC, and Fig. 17(b) the results after TEC. After two to three rounds of correction, the eccentric error can be corrected to within 7 µm and the tilt error to within 30″. With this centering accuracy, high-precision sub-aperture scanning imaging of aspheric component surfaces can be achieved, and the accuracy of quantitative detection results can be guaranteed [12].

 figure: Fig. 17.

Fig. 17. (a) is the repeatability experimental results of EEC, (b) is the repeatability experimental results of TEC.


5. Conclusion

This paper proposes an efficient and robust high-precision automatic centering method (HPACM) for rotationally symmetric aspheric optical components. To support full automation of the system, AFA is proposed based on the definition evaluation function SMD2. To reduce the time complexity of image processing, ARAA is proposed; it efficiently and robustly extracts the ROI containing the crosshair center and records the pixel coordinate of the ROI's upper-left corner. PCCEA then accurately extracts the pixel coordinate of the crosshair center within the ROI; combined with the upper-left-corner coordinate obtained by ARAA, the coordinate of the crosshair center in the whole image is obtained. From at least three such center coordinates, the position of the mechanical spin axis is fitted, and EEC and TEC are performed. In the TEC process, the displacement must be converted into the corresponding angle according to the size parameters of the mechanical structure. In the experiments, the proposed HPACM corrects the eccentric and tilt errors to within 7 µm and 0.5′, respectively. Both the correction accuracy and the quantifiable results are unattainable by manual correction. This work is of great significance to the development of automatic equipment for detecting surface defects of large-aperture aspheric components.
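For reference, the SMD2 definition evaluation function that drives AFA can be written in a few lines. This is a hypothetical NumPy sketch of the normalized SMD2 formula; the function name is ours.

```python
import numpy as np

def smd2(img):
    """Normalized SMD2 focus measure: product of horizontal and vertical
    gray-level differences, summed and scaled by 255 * i * j."""
    f = np.asarray(img, dtype=float)
    dx = np.abs(f[1:, 1:] - f[:-1, 1:])  # |f(x, y) - f(x - 1, y)|
    dy = np.abs(f[1:, 1:] - f[1:, :-1])  # |f(x, y) - f(x, y - 1)|
    i, j = f.shape
    return float(np.sum(dx * dy) / (255.0 * i * j))
```

A well-focused crosshair image produces large neighboring-pixel differences in both directions simultaneously, so SMD2 peaks at best focus; AFA searches along the focus axis for this maximum.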

Funding

National Natural Science Foundation of China (61627825, 61875173).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. Z. Zhu, “Design Ultra-Compact Aspherical Lenses for Extended Sources Using Particle Swarm Optical Optimization Algorithm,” IEEE Photonics J. 11(6), 1–14 (2019). [CrossRef]  

2. J. Lin, “Development of an Aspherical Aerial Camera Optical System,” IEEE Photonics J. 11(6), 1–13 (2019). [CrossRef]  

3. C. Zhou, “Fiber-Optic Refractometer Based on a Reflective Aspheric Prism Rendering Adjustable Sensitivity,” J. Lightwave Technol. 37(4), 1381–1387 (2019). [CrossRef]  

4. F. Zhang, “Correlative Comparison of Three Ocular Axes to Tilt and Decentration of Intraocular Lens and Their Effects on Visual Acuity,” Ophthalmic Res. 63(2), 165–173 (2020). [CrossRef]  

5. J.T. Madden, “Integrated nonlinear optical imaging microscope for on-axis crystal detection and centering at a synchrotron beamline,” J. Synchrotron Radiat. 20(4), 531–540 (2013). [CrossRef]  

6. J.A. Newman, “Integrated Nonlinear Optical Microscope for Crystal Centering on a Synchrotron X-ray Beamline,” Microsc. Microanal. 19(S2), 1058–1059 (2013). [CrossRef]  

7. B. Matthias, “Lens centering of aspheres for high-quality optics,” Adv. Opt. Technol. 1(6), 1 (2012). [CrossRef]  

8. H. Christian and W. Paul, “Minimising Centering Errors of Injection-Compression Moulded Plastics Lenses,” Micro Nanosyst. 6(1), 34–41 (2014). [CrossRef]  

9. A. De Castro, P. Rosales, and S. Marcos, “Tilt and decentration of intraocular lenses in vivo from Purkinje and Scheimpflug imaging. Validation study,” J. Cataract Refractive Surg. 33(3), 418–429 (2007). [CrossRef]  

10. G. Liu, “Dispersion compensation method based on focus definition evaluation functions for high-resolution laser frequency scanning interference measurement,” Opt. Commun. 386, 57–64 (2016). [CrossRef]  

11. F. Wang, “A Machine Vision Method for Correction of Eccentric Error Based on Adaptive Enhancement Algorithm,” IEEE Trans. Instrum. Meas. 70, 1–11 (2021). [CrossRef]  

12. F. Wang, “Fast path planning algorithm for large-aperture aspheric optical elements based on minimum object depth and a self-optimized overlap coefficient,” Appl. Opt. 61(11), 3123–3133 (2022). [CrossRef]  






Equations (17)


$$z=\frac{c(x^{2}+y^{2})}{1+\sqrt{1-(1+k)c^{2}(x^{2}+y^{2})}}+\sum_{i=2}^{N}A_{2i}(x^{2}+y^{2})^{i},\tag{1}$$

$$\Delta = r_{e}\varepsilon /2\Gamma,\tag{2}$$

$$I=(l-r)u/r,\tag{3}$$

$$I'=nI/n',\tag{4}$$

$$u'=I+u-I',\tag{5}$$

$$l'=r(1+I'/u').\tag{6}$$

$$u_{k}=h_{k}/l_{k}.\tag{7}$$

$$u_{k}'=\frac{(h_{k}/r_{k})(n_{k}-n_{k+1})+n_{k+1}u_{k}}{n_{k}},\tag{8}$$

$$l_{k}'=h_{k}/u_{k}'.\tag{9}$$

$$l_{k}'=\frac{r_{k-1}(r_{k}-d_{k-1})\,n_{k-1}}{(n_{k-1}-n_{k})(r_{k}-d_{k-1})+n_{k-1}n_{k}r_{k-1}}.\tag{10}$$

$$l_{2}'=\frac{r_{1}(r_{2}-d_{1})\,n_{1}}{(n_{1}-n_{2})(r_{2}-d_{1})+n_{1}n_{2}r_{1}},\tag{11}$$

$$\mathrm{SMD2}=\frac{\sum_{x,y=1,1}^{i,j}|f(x,y)-f(x-1,y)|\cdot|f(x,y)-f(x,y-1)|}{255\,ij}.\tag{12}$$

$$\delta=\left|2f(n)-f(n-1)-f(n+1)\right|/2.\tag{13}$$

$$\varphi_{x}=x_{t}/(h_{\varphi x}r_{\varphi x}+|l_{2}'|),\tag{14}$$

$$\varphi_{y}=y_{t}/(h_{\varphi y}r_{\varphi y}+|l_{2}'|).\tag{15}$$

$$T(x,y)=\sigma\exp\!\left[-\bigl(i(x,y)-(\mathrm{ksize}-1)/2\bigr)^{2}/(2\theta)^{2}\right],\tag{16}$$

$$\mathrm{dst}(x,y)=\begin{cases}255,&\mathrm{src}(x,y)>T(x,y)\\[2pt]0,&\mathrm{src}(x,y)\le T(x,y)\end{cases}\tag{17}$$
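The per-pixel Gaussian threshold T(x, y) and the binarization rule dst(x, y) above amount to comparing each pixel against a Gaussian-weighted local mean. A minimal NumPy sketch follows; the function name and the ksize/sigma/offset parameters are illustrative assumptions, and in practice OpenCV's cv2.adaptiveThreshold with ADAPTIVE_THRESH_GAUSSIAN_C performs the same operation.

```python
import numpy as np

def gaussian_adaptive_threshold(src, ksize=11, sigma=2.0, offset=0.0):
    """Binarize src with a per-pixel threshold T(x, y): the Gaussian-weighted
    mean of a ksize x ksize neighborhood (computed by separable convolution)."""
    # 1-D Gaussian weights centered at (ksize - 1) / 2, normalized to sum to 1
    i = np.arange(ksize)
    g = np.exp(-((i - (ksize - 1) / 2.0) ** 2) / (2.0 * sigma ** 2))
    g /= g.sum()

    f = np.asarray(src, dtype=float)
    pad = ksize // 2
    fp = np.pad(f, pad, mode="edge")
    # Separable filtering: convolve rows, then columns, keeping 'valid' windows
    rows = np.apply_along_axis(lambda v: np.convolve(v, g, mode="valid"), 1, fp)
    T = np.apply_along_axis(lambda v: np.convolve(v, g, mode="valid"), 0, rows)
    # dst(x, y) = 255 if src(x, y) > T(x, y) else 0
    return np.where(f > T - offset, 255, 0).astype(np.uint8)
```

The offset plays the role of OpenCV's constant C, lowering the local threshold by a fixed margin; this helps separate the bright crosshair lines from a slowly varying background before the center is extracted.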