Structured-light modulation analysis technique for contamination and defect detection of specular surfaces and transparent objects


Abstract

With the rapid development of the electronics industry, increasingly stringent requirements are placed on the surface quality of optical components in electronic devices. The Structured-Light Modulation Analysis Technique (SMAT) was recently proposed to detect contamination and defects on specular surfaces. In this paper, the mechanisms and mathematical models of SMAT are analyzed and established for the first time, based on the theory of photometry and the optical characteristics of contamination and defects. Moreover, a novel transmission system adopting SMAT is designed specifically for the defect detection of transparent objects. For both the reflection and transmission systems, simulations and experiments were conducted, and comparative studies with uniform planar illumination were carried out. Simulations on the influence of the incident light source region showed that SMAT can eliminate the interference of ambient light, while the uniform planar illumination technique cannot. Experiments on samples with a specular surface and a transparent material demonstrated that the modulation values at contamination and defects are much lower than those at clean and intact regions, and that defects and contamination were clearly distinguished with SMAT, while they were almost indiscernible with uniform planar illumination. Therefore, SMAT can be applied to the whole-field inspection of optical components in industrial environments.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

To meet the demand for high-quality optical components, quality control has become one of the most important subjects in the optical manufacturing industry. Surface defects are among the biggest factors affecting the quality of optical components, and their presence can have disastrous consequences [1,2]. Therefore, many methods have been proposed for the surface inspection of optical components.

The Atomic Force Microscope (AFM) and the Scanning Electron Microscope (SEM) are well suited to detecting surface defects on fine optics [3,4], but their fields of view are too narrow to inspect large optical components.

Because of the limited accuracy of existing methods, defect detection in optical component manufacturing is still dominated by human visual inspection [5]. However, its efficiency is very low, and the results depend heavily on the inspector's eyesight and experience [6].

Machine vision is a promising detection approach that can solve the problems of low efficiency and instability of manual detection. Based on machine vision, a number of detection methods relying on different optical principles have been developed. Some detection systems adopt polarized illumination and detect defects by utilizing the polarization characteristics of defects on optical components [7,8]. In addition, the single-sided diffraction of defects can also be used to detect defects on optical components [9]. However, the accuracy of these two methods is not ideal. Dark-field imaging, which utilizes the light scattered by defects, is a popular research direction in the field of defect detection [10–12]. However, this method also encounters bottlenecks in the detection of shallow scratches and small defects [5]. To obtain more accurate detection results, it is necessary to introduce a microscope to inspect optical components, which increases cost and time.

In recent years, detection methods using structured light for illumination have gradually become popular [13]. Phase Measuring Profilometry (PMP) is a typical method for the measurement of diffuse objects [14,15]. Modulation Measuring Profilometry (MMP) is a method that uses modulation to obtain the surface topography of diffuse objects [16–18]. Phase Measuring Deflectometry (PMD) is similar to PMP in principle and is applied to measure the surface topography of specular objects [19,20]. Based on PMD, some improved methods have been developed to adapt to various situations [21,22]. When PMD is used to inspect specular objects, it is essential to obtain the wrapped phase and calculate the unwrapped phase in the detection process, and this calculation is relatively complicated.

Inspired by PMD and MMP, a modulation detection system was proposed for the inspection of specular surfaces [23]. We call this proposed method the Structured-Light Modulation Analysis Technique (SMAT). SMAT can be applied to detect contamination and defects such as grease, dust, scratches, pits, cracks, and breakages. In Ref. [23], SMAT was applied to specular inspection for the first time, and polarized structured-light illumination together with a linear polarizer was adopted to detect defects on specular surfaces while eliminating the influence of dust. However, the detailed mechanism of SMAT was not analyzed in Ref. [23]. In Section 2 below, the mechanism and mathematical model of SMAT are analyzed and established based on the theory of photometry and the optical characteristics of contamination and defects. Moreover, a novel transmission system is designed specifically for the defect detection of transparent objects. Section 3 presents simulations from two aspects: the increase of the incident light source region and the deviation of the incident light source position. Finally, the effectiveness of SMAT is verified by the experiments in Section 4. Section 5 outlines the conclusions of this work.

2. Principle

The basic principle of SMAT is expounded in Section 2.1. The mechanisms and mathematical models for specular surfaces and transparent objects are analyzed and established in Sections 2.2 and 2.3, respectively. The systems used for comparison are described in Section 2.4.

2.1 Principle of structured-light modulation analysis technique

The system structure of SMAT is shown in Fig. 1. In accordance with the law of reflection, the angle between the normal of the display screen and the horizontal plane is set equal to the angle between the camera's optical axis and the horizontal plane.

Fig. 1. The structure of the reflection system.

The light intensity projected by the display screen can be expressed as:

$${I_n}(x,y) = A(x,y) + B(x,y) \cdot \cos [{\varphi (x,y) + {\delta_n}} ]$$
$A(x,y)$ is the average light intensity of the fringe pattern, and $B(x,y)$ reflects its contrast. $\varphi (x,y) = 2\pi fx$ is the initial phase distribution, where f is the spatial frequency. ${\delta _n} = \frac{n}{N}2\pi$ is the phase shift, and N is the number of phase-shift steps.
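
As a concrete illustration, a minimal Python sketch (not part of the original work) of generating the phase-shifted fringe patterns of Eq. (1) is given below; the display resolution, A, B, f, and N are illustrative assumptions.

```python
import numpy as np

H, W = 1080, 1920            # display resolution in pixels (illustrative)
A, B = 128.0, 100.0          # average intensity A(x, y) and contrast B(x, y)
f = 1.0 / 64.0               # spatial frequency f in cycles per pixel (illustrative)
N = 4                        # number of phase-shift steps

x = np.arange(W)
phi = 2.0 * np.pi * f * x    # initial phase distribution, phi(x, y) = 2*pi*f*x

# One H x W pattern per phase shift delta_n = 2*pi*n/N, as in Eq. (1)
patterns = [np.tile(A + B * np.cos(phi + 2.0 * np.pi * n / N), (H, 1))
            for n in range(N)]
```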

Assuming that the surface reflectivity of the specular object is 100%, the light intensity of the distorted fringe pattern reflected by the specular surface can be expressed as:

$$I_n^{\prime}(x,y) = A(x,y) + B(x,y) \cdot \cos [{\varphi (x,y) + \varphi^{\prime}(x,y) + {\delta_n}} ]$$
$\varphi ^{\prime}(x,y)$ is the additional phase introduced by the topography of the specular object.

The modulation equation is [16]:

$$M(x,y) = \sqrt {{{(\sum\limits_{n = 0}^{N - 1} {I_n^{\prime}(x,y) \cdot \cos {\delta _n}} )^2}} + {{(\sum\limits_{n = 0}^{N - 1} {I_n^{\prime}(x,y) \cdot \sin {\delta _n}} )^2}}}$$
Substituting Eq. (2) into Eq. (3), the simplified modulation equation is:
$$M(x,y) = \frac{N}{2}B(x,y)$$
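A minimal sketch of evaluating Eq. (3) pixel-wise from a stack of N captured fringe images is given below; for ideal sinusoidal fringes the result reduces to Eq. (4), and any phase-shift-independent offset (such as ambient light) cancels out. The array name and shape are assumptions for illustration.

```python
import numpy as np

def modulation(I_prime):
    """Eq. (3): I_prime is an (N, H, W) stack of captured fringe images."""
    N = I_prime.shape[0]
    delta = 2.0 * np.pi * np.arange(N) / N                   # phase shifts delta_n
    c = np.tensordot(np.cos(delta), I_prime, axes=(0, 0))    # sum_n I'_n * cos(delta_n)
    s = np.tensordot(np.sin(delta), I_prime, axes=(0, 0))    # sum_n I'_n * sin(delta_n)
    return np.sqrt(c ** 2 + s ** 2)                          # equals (N/2)*B(x, y) for Eq. (2)
```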
When there is contamination or a defect on the specular surface or transparent object, the modulation value changes. The following subsections analyze, based on the theory of photometry, the mechanism of the modulation at locations with contamination or defects.

2.2 Proposed mechanism and mathematical model of the structured-light modulation analysis technique for the detection of specular surfaces with contamination and defects

The principle of optical path reversibility and the determined reflected light rays are adopted to analyze the illumination received by one pixel of the camera imaging target. Contamination and defects change the topography of specular surfaces [24], which alters the relation of the light rays. The original light relation and the altered light relation are shown in Fig. 2(a) and Fig. 2(b), respectively. Ignoring the influence of the Point Spread Function (PSF) of the imaging system, the modulation equation is derived as follows.

Fig. 2. The light reflection relation (a) on the clean and intact specular surface, (b) when there is contamination or defect.

When the specular surface is clean and intact, the reflected light rays received by one pixel of the camera imaging target are assumed to come from a cell zone on the display screen (Cell Zone 2), as shown in Fig. 2(a). Cell Zone 2 is assumed to be a uniform illumination light source. The region on the specular surface captured by one pixel of the camera imaging target is called Cell Zone 0. When there is contamination or a defect, the specular surface shows partially diffuse reflection characteristics, which diverges the light source region from the original cell zone to a large region, as shown in Fig. 2(b). This large light source region consists of many instances of Cell Zone 2. The region on the specular surface captured by one pixel of the camera imaging target is then called Cell Zone 1. It is reasonable to assume that a portion of the incident light rays in Cell Zone 1 come from the display screen, and the others come from the environment.

$T$ is the spatial period of the fringe pattern on the display screen, $cell$ denotes the side length of Cell Zone 2, and ${L_0}$ is the distance between the specular surface and the display screen. All of the analyses below are based on one pixel of the camera imaging target.

The incident luminous flux on the specular surface can be calculated from the luminance of the fringe patterns, and the reflected luminous flux captured by one pixel of the camera imaging target is obtained by multiplying the incident luminous flux by the reflection coefficient. This reflected luminous flux corresponds to the light intensity $I_n^{\prime}(x,y)$ in Eq. (3). A new modulation equation can then be calculated by substituting the reflected luminous flux equation into Eq. (3).

The display screen used for illumination is a Lambertian radiator, whose emission is anisotropic. The illuminance produced by an element of the planar light source at a distance r is [25]:

$$E = \frac{{Ld{A_s}\cos {\theta _1}\cos {\theta _2}}}{{{r^2}}}$$
$L$ represents the luminance of the planar light source, $d{A_s}$ is the area of the light source element, ${\theta _1}$ is the angle between the normal of the light source element and the light ray emitted from this element, and ${\theta _2}$ is the angle between the normal of the illuminated plane and that light ray. $d{A_s}$ can be considered the area of Cell Zone 2 and is set to a constant value ${A_s}$.

The relations between the incident and reflected light rays are shown in Figs. 3(a) and 3(b). $\gamma$ represents the phase shift relative to the original light source position, with the direction toward the camera taken as positive. k is the deviation relative to the original light source position in the direction perpendicular to Fig. 3(a), as shown in Fig. 3(b); it is measured in units of the side length of Cell Zone 2, with the right direction taken as positive. In the $\gamma$ direction, the included angle between the display screen and the horizontal plane is $\zeta$, and the angle between the normal of the display screen and the normal of the clean and intact illuminated surface is also $\zeta$. In the k direction, the light rays from the display screen are shown in Fig. 3(b). In the equations below, the superscript $(R)$ identifies parameters of the fringe reflection system.

Fig. 3. The relation between incident light rays and reflected light rays (a) in $\gamma$ direction, (b) in k direction. Green lines are the incident light rays when the surface is clean and intact, and red lines indicate the incident light rays when there is contamination or defect. (c) The situation of the incident light.

When there is contamination or a defect on the detected specular surface, the angle between the incident light ray ($line\_i$) and the normal of the display screen ($line\_l$) can be calculated as:

$$\theta _1^{(R )} = \arctan \frac{{\sqrt {{{({g \cdot cell} )}^2} + {{({k \cdot cell} )}^2}} }}{{{L_0}}}$$
Let $g = \frac{\gamma }{{2\pi }}T$, where g is the displacement in the $\gamma$ direction on the plane of the display screen; it is also measured in units of the side length of Cell Zone 2.

According to the law of reflection, ${\theta _2}$ is half of the included angle between the incident ray $line\_i$ and the reflected ray $line\_r$. We therefore first calculate the angle between $line\_i$ and $line\_r$ ($2{\theta _2}$), from which ${\theta _2}$ is deduced.

The incident light ray $line\_i$ is:

$$z ={-} x\cot \left[ {\zeta - \arctan \left( {\frac{{g \cdot cell}}{{{L_0}}}} \right)} \right] = y\frac{{{L_0}\cos \zeta + g \cdot cell \cdot \sin \zeta }}{{k \cdot cell}}$$
The reflected light ray $line\_r$ is:
$$z = x\cot \zeta$$
$\cos 2{\theta _2}$ can be calculated from the direction vectors of the two rays using solid geometry, and ${\theta _2}$ is:
$$\theta _2^{(R )} = \frac{1}{2}\arccos \left( {\frac{{ - d_1^{(R )} + \cot \zeta }}{{\sqrt {[{{{({d_1^{(R )}} )}^2} + {{({d_2^{(R )}} )}^2} + 1} ]({1 + {{\cot }^2}\zeta } )} }}} \right)$$
$$d_1^{(R )} = \frac{1}{{\cot \left[ {\zeta - \arctan \left( {\frac{{g \cdot cell}}{{{L_0}}}} \right)} \right]}}$$
$$d_2^{(R )} = \frac{{k \cdot cell}}{{{L_0}\cos \zeta + g \cdot cell \cdot \sin \zeta }}$$
$r$ is:
$${r^{(R )}} = \sqrt {{{({g \cdot cell} )}^2} + {{({k \cdot cell} )}^2} + L_0^2} $$
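The geometric quantities above can be evaluated numerically. The sketch below is an illustrative implementation, not the authors' code: it computes $\theta _1^{(R )}$, $\theta _2^{(R )}$ and ${r^{(R )}}$ from Eqs. (6) and (9)–(12) for a source offset (g, k), with assumed values of $cell$, ${L_0}$ and $\zeta$.

```python
import numpy as np

def reflection_geometry(g, k, cell=0.272e-3, L0=0.5, zeta=np.deg2rad(30.0)):
    """Eqs. (6), (9)-(12) for a source offset (g, k) in units of Cell Zone 2."""
    gx, ky = g * cell, k * cell
    theta1 = np.arctan(np.hypot(gx, ky) / L0)                      # Eq. (6)
    d1 = np.tan(zeta - np.arctan(gx / L0))                         # Eq. (10), 1/cot(...)
    d2 = ky / (L0 * np.cos(zeta) + gx * np.sin(zeta))              # Eq. (11)
    cot_zeta = 1.0 / np.tan(zeta)
    cos_2theta2 = (-d1 + cot_zeta) / np.sqrt(
        (d1 ** 2 + d2 ** 2 + 1.0) * (1.0 + cot_zeta ** 2))         # Eq. (9)
    theta2 = 0.5 * np.arccos(cos_2theta2)
    r = np.sqrt(gx ** 2 + ky ** 2 + L0 ** 2)                       # Eq. (12)
    return theta1, theta2, r
```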
The position of the light source on the display screen changes, and its luminance becomes:
$$L_n^{(R )} = A(x,y) + B(x,y) \cdot \cos [{\varphi (x,y) + {\delta_n} + \gamma } ]$$
Within the fan-shaped range of light received at the contamination or defect, a fraction of the illuminance on Cell Zone 1 comes from a Cell Zone 2 on the display screen, as shown in Fig. 2(b); the illuminance contributed by this Cell Zone 2 is:
$$E_n^{(R )}(x,y) = \frac{{L_n^{(R )}{A_s}\cos \theta _1^{(R )}\cos \theta _2^{(R )}}}{{{{({{r^{(R )}}} )}^2}}}$$
Cell Zone 1 can be divided into two regions, as shown in Fig. 3(c). In the first region, the incident light comes from the display screen, while in the second region, the incident light comes from the environment. Suppose that the area proportion of the first region in Cell Zone 1 is v. The first region can be further divided into $\sigma$ smaller regions with equal area, and it is assumed that the incident light of each small region comes from a Cell Zone 2 on the display screen. s is the area of Cell Zone 1. Let the illuminance from the environment be $C(x,y)$.

In Cell Zone 1, the reflected luminous flux is:

$$\Phi _{nR}^{(R )}(x,y) = \alpha s\left( {\frac{v}{\sigma }\sum\limits_{k = {\sigma_{k0}}}^{{\sigma_k}} {\sum\limits_{\gamma = {\sigma_{\gamma 0}}}^{{\sigma_\gamma }} {\frac{{L_n^{(R )}{A_s}\cos \theta_1^{(R )}\cos \theta_2^{(R )}}}{{{{({{r^{(R )}}} )}^2}}}} } + (1 - v)C(x,y)} \right)$$
$\alpha$ is the reflectivity of the specular surface.

If the specular surface is clean and intact, with Cell Zone 0 of area ${s_0}$ and reflectivity ${\alpha _0}$, the reflected luminous flux is:

$$\Phi _{n0}^{(R )}(x,y) = \frac{{{\alpha _0}{s_0}{A_s}\cos \zeta }}{{L_0^2}}\{{A(x,y)\textrm{ + }B(x,y)\cos [{\varphi (x,y) + {\delta_n}} ]} \}$$
By substituting Eq. (15) into Eq. (3), the modulation equation can be obtained:
$${M^{(R)}}(x,y) = \frac{{\alpha sv{A_s}NB(x,y)}}{{2\sigma }}\sqrt {\begin{array}{{c}} {{{\left( {\sum\limits_{k = {\sigma _{k0}}}^{{\sigma _k}} {\sum\limits_{\gamma = {\sigma _{\gamma 0}}}^{{\sigma _\gamma }} {\frac{{\cos \theta _1^{(R)}\cos \theta _2^{(R)}}}{{{{({r^{(R)}})}^2}}} \cdot \cos [\varphi (x,y) + \gamma ]} } } \right)}^2} + }\\ {{{\left( {\sum\limits_{k = {\sigma _{k0}}}^{{\sigma _k}} {\sum\limits_{\gamma = {\sigma _{\gamma 0}}}^{{\sigma _\gamma }} {\frac{{\cos \theta _1^{(R)}\cos \theta _2^{(R)}}}{{{{({r^{(R)}})}^2}}} \cdot \sin [\varphi (x,y) + \gamma ]} } } \right)}^2}} \end{array}}$$
If the specular surface is clean and intact, substituting Eq. (16) into Eq. (3) gives the simplified modulation:
$$M_0^{(R )}(x,y) = \frac{{{\alpha _0}{s_0}{A_s}NB(x,y)\cos \zeta }}{{2L_0^2}}$$
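
A minimal numerical sketch of the relative modulation $M^{(R)}/M_0^{(R)}$ following Eqs. (17)–(18) is given below. It uses the `reflection_geometry()` sketch above, sets $\varphi (x,y) = 0$ for brevity, and treats the fringe period T (in units of $cell$), the ratio $\alpha s/({\alpha _0}{s_0})$ and v as illustrative assumptions. Note that the ambient term $C(x,y)$ does not appear, since it cancels in the modulation.

```python
import numpy as np

def relative_modulation_R(g_list, k_list, v=1.0, T=64.0, cell=0.272e-3,
                          L0=0.5, zeta=np.deg2rad(30.0), area_ratio=1.0):
    """Ratio of Eq. (17) to Eq. (18); area_ratio stands for alpha*s/(alpha0*s0)."""
    sigma = len(g_list) * len(k_list)       # number of contributing Cell Zone 2 regions
    c_sum, s_sum = 0.0, 0.0
    for g in g_list:
        for k in k_list:
            th1, th2, r = reflection_geometry(g, k, cell, L0, zeta)
            w = np.cos(th1) * np.cos(th2) / r ** 2
            gamma = 2.0 * np.pi * g / T     # phase shift of the offset source, g = gamma*T/(2*pi)
            c_sum += w * np.cos(gamma)
            s_sum += w * np.sin(gamma)
    M = (v / sigma) * np.hypot(c_sum, s_sum)        # proportional to Eq. (17)
    M0 = np.cos(zeta) / L0 ** 2                     # proportional to Eq. (18)
    return area_ratio * M / M0
```

For a clean and intact surface (a single Cell Zone 2 at the original position and v = 1), this sketch returns 1, as expected from the ratio of Eqs. (17) and (18).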

2.3 Proposed mechanism and mathematical model of the structured-light modulation analysis technique for the detection of transparent objects with contamination and defects

The transmission system is designed to detect smooth transparent objects with parallel front and back surfaces. The plane of the object is set parallel to the plane of the display screen, and the optical axis of the camera is perpendicular to both the object plane and the display screen plane. The system structure is shown in Fig. 4. This transmission system has a mechanism similar to that of the reflection system.

Fig. 4. The structure of the transmission system.

Contamination and defects cause a change in the topography of smooth transparent objects [24], and this change alters the original light transmission relation. The original light relation and the altered light relation are shown in Fig. 5(a) and Fig. 5(b), respectively. Ignoring the influence of the Point Spread Function (PSF) of the imaging system, the modulation equation is derived as follows.

Fig. 5. The light transmission relation (a) of the clean and intact smooth transparent object, (b) when there is contamination or defect.

$T$ is the spatial period of the fringe pattern, $cell$ denotes the side length of Cell Zone 2, and ${L_0}$ is the distance between the detected transparent object and the display screen. All of the analyses below are based on one pixel of the camera imaging target.

Due to the symmetry of the transmission system, only Fig. 6(a) is used to represent the light ray relation, and all parameters below except $\gamma$ can be taken as positive values. $\gamma$ represents the phase shift relative to the original light source position. k is the deviation relative to the original light source position in the direction perpendicular to Fig. 6(a), measured in units of the side length of Cell Zone 2. In the equations below, the superscript $(T)$ identifies parameters of the fringe transmission system.

Fig. 6. (a) The relation between incident light rays and transmitted light rays. Green line is the incident light ray when the object is clean and intact, and red line represents the incident light ray when there is contamination or defect. (b) The situation of the incident light.

According to Fig. 6(a), combined with the law of refraction, the refraction equation is:

$$\frac{{\sin \theta _2^{(T )}}}{{\sin c}} = n$$
$\theta _2^{(T )}$ is the included angle between the normal of the illuminated plane and the light ray emitted by the display screen, c is the deflection angle between the clean and intact plane and the plane with contamination or a defect, and n is the refractive index of the object relative to air.

According to the geometric relation in Fig. 6(a), we can get:

$$\frac{{\sin ({c + \theta_1^{(T )}} )}}{{\sin c}} = n$$
$$\theta _1^{(T )} = \arctan \frac{{\sqrt {{{({g \cdot cell} )}^2} + {{({k \cdot cell} )}^2}} }}{{{L_0}}}$$
Let $g = \frac{\gamma }{{2\pi }}T$, where g is the displacement in the $\gamma$ direction on the plane of the display screen; it is also measured in units of the side length of Cell Zone 2.

Solving for c from Eq. (20) and for r from the light relation in Fig. 6(a) gives:

$${c^{(T )}} = {\mathop{\textrm {arccot}}\nolimits} \left( {\frac{n}{{\sin \theta_1^{(T )}}} - \cot \theta_1^{(T )}} \right)$$
$${r^{(T )}} = \sqrt {{{({g \cdot cell} )}^2} + {{({k \cdot cell} )}^2} + L_0^2} $$
From the above results, we obtain:
$$\theta _2^{(T )} = {c^{(T )}} + \theta _1^{(T )}$$
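As with the reflection system, these relations can be evaluated numerically. The sketch below is illustrative, not the authors' code: it computes $\theta _1^{(T )}$, $\theta _2^{(T )}$ and ${r^{(T )}}$ from Eqs. (21)–(24), with an assumed refractive index and geometry.

```python
import numpy as np

def transmission_geometry(g, k, cell=0.272e-3, L0=0.5, n=1.5):
    """Eqs. (21)-(24) for a source offset (g, k) in units of Cell Zone 2."""
    gx, ky = g * cell, k * cell
    theta1 = np.arctan(np.hypot(gx, ky) / L0)                       # Eq. (21)
    r = np.sqrt(gx ** 2 + ky ** 2 + L0 ** 2)                        # Eq. (23)
    if theta1 == 0.0:              # source directly behind the pixel: no deflection
        return 0.0, 0.0, r
    c = np.arctan2(1.0, n / np.sin(theta1) - 1.0 / np.tan(theta1))  # Eq. (22), arccot(x)
    theta2 = c + theta1                                             # Eq. (24)
    return theta1, theta2, r
```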
The position of the light source on the display screen changes, and its luminance becomes:
$$L_n^{(T )} = A(x,y) + B(x,y) \cdot \cos [{\varphi (x,y) + {\delta_n} + \gamma } ]$$
Let $d{A_s}$ be a constant value ${A_s}$. Within the fan-shaped range of light received at the contamination or defect, a fraction of the illuminance on Cell Zone 1 comes from a Cell Zone 2 on the display screen, as shown in Fig. 5(b); the illuminance contributed by this Cell Zone 2 is:
$$E_n^{(T )}(x,y) = \frac{{L_n^{(T )}{A_s}\cos \theta _1^{(T )}\cos \theta _2^{(T )}}}{{{{({{r^{(T )}}} )}^2}}}$$
The detailed situation of the incident light is shown in Fig. 6(b). Suppose that the area proportion of the first region in Cell Zone 1 is v. In Cell Zone 1, the first region can be further divided into $\sigma$ smaller regions with equal area, and it is assumed that the incident light of each small region comes from a Cell Zone 2 on the display screen. s is the area of Cell Zone 1. Let the illuminance from the environment be $C(x,y)$.

In Cell Zone 1, the transmitted luminous flux is:

$$\Phi _{nT}^{(T )}(x,y) = \beta s\left( {\frac{v}{\sigma }\sum\limits_{k = {\sigma_{k0}}}^{{\sigma_k}} {\sum\limits_{\gamma = {\sigma_{\gamma 0}}}^{{\sigma_\gamma }} {\frac{{L_n^{(T )}{A_s}\cos \theta_1^{(T )}\cos \theta_2^{(T )}}}{{{{({{r^{(T )}}} )}^2}}}} } + (1 - v)C(x,y)} \right)$$
$\beta$ is the transmissivity of the transparent object.

If the transparent object is clean and intact, with Cell Zone 0 of area ${s_0}$ and transmissivity ${\beta _0}$, the transmitted luminous flux is:

$$\Phi _{n0}^{(T )}(x,y) = \frac{{{\beta _0}{s_0}{A_s}}}{{L_0^2}}\{{A(x,y)\textrm{ + }B(x,y)\cos [{\varphi (x,y) + {\delta_n}} ]} \}$$
By substituting Eq. (27) into Eq. (3), the modulation equation of the transmitted light in one pixel can be attained:
$${M^{\left( T \right)}}(x,y) = \frac{{\beta sv{A_s}NB(x,y)}}{{2\sigma }}\sqrt{\begin{array}{l} {\left( {\sum\limits_{k = {\sigma _{k0}}}^{{\sigma _k}} {\sum\limits_{\gamma = {\sigma _{\gamma 0}}}^{{\sigma _\gamma }} {\frac{{\cos \theta _1^{\left( T \right)}\cos \theta _2^{\left( T \right)}}}{{{{\left( {{r^{\left( T \right)}}} \right)}^2}}} \cdot \cos \left[ {\varphi (x,y) + \gamma } \right]} } } \right)^2 +} \\ {\left( {\sum\limits_{k = {\sigma _{k0}}}^{{\sigma _k}} {\sum\limits_{\gamma = {\sigma _{\gamma 0}}}^{{\sigma _\gamma }} {\frac{{\cos \theta _1^{\left( T \right)}\cos \theta _2^{\left( T \right)}}}{{{{\left( {{r^{\left( T \right)}}} \right)}^2}}} \cdot \sin \left[ {\varphi (x,y) + \gamma } \right]} } } \right)^2} \end{array}} $$
If the transparent object is clean and intact, substituting Eq. (28) into Eq. (3) gives the simplified modulation:
$$M_0^{(T )}(x,y) = \frac{{{\beta _0}{s_0}{A_s}NB(x,y)}}{{2L_0^2}}$$
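
The relative modulation $M^{(T)}/M_0^{(T)}$ of Eqs. (29)–(30) can be sketched in the same way as for the reflection system, using the `transmission_geometry()` sketch above; again T, v and the ratio $\beta s/({\beta _0}{s_0})$ are illustrative assumptions, and $\varphi (x,y)$ is set to zero for brevity.

```python
import numpy as np

def relative_modulation_T(g_list, k_list, v=1.0, T=64.0, cell=0.272e-3,
                          L0=0.5, n=1.5, area_ratio=1.0):
    """Ratio of Eq. (29) to Eq. (30); area_ratio stands for beta*s/(beta0*s0)."""
    sigma = len(g_list) * len(k_list)
    c_sum, s_sum = 0.0, 0.0
    for g in g_list:
        for k in k_list:
            th1, th2, r = transmission_geometry(g, k, cell, L0, n)
            w = np.cos(th1) * np.cos(th2) / r ** 2
            gamma = 2.0 * np.pi * g / T
            c_sum += w * np.cos(gamma)
            s_sum += w * np.sin(gamma)
    M = (v / sigma) * np.hypot(c_sum, s_sum)        # proportional to Eq. (29)
    M0 = 1.0 / L0 ** 2                              # proportional to Eq. (30)
    return area_ratio * M / M0
```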

2.4 The reflection system and the transmission system with uniform planar light illumination

To evaluate the quality of the modulation results based on fringe structured light, detection systems using a uniform planar light source for illumination are designed to simulate the detection results under normal illumination. The system structures are the same as the systems described above, except that the light pattern projected by the display screen is different. The mechanisms are likewise analyzed for one pixel of the camera imaging target.

In the equations below, the superscript $(W)$ identifies parameters common to both the reflection and transmission systems, while the superscripts $(RW)$ and $(TW)$ identify parameters of the reflection system and the transmission system, respectively.

The luminance equation is:

$${L^{(W )}}(x,y) = A(x,y) + B(x,y)$$
Based on Eq. (15), the reflected luminous flux is:
$${\Phi ^{({RW} )}}(x,y) = \alpha s\left( {\frac{v}{\sigma }\sum\limits_{k = {\sigma_{k0}}}^{{\sigma_k}} {\sum\limits_{\gamma = {\sigma_{\gamma 0}}}^{{\sigma_\gamma }} {\frac{{{L^{(W )}}{A_s}\cos \theta_1^{(R )}\cos \theta_2^{(R )}}}{{{{({{r^{(R )}}} )}^2}}}} } + (1 - v)C(x,y)} \right)$$
$\theta _1^{(R )}$, $\theta _2^{(R )}$ and ${r^{(R )}}$ are given by Eqs. (6), (9), and (12), respectively.

If the region on the specular surface captured by one pixel of the camera imaging target is clean and intact, the luminous flux becomes:

$$\Phi _0^{({RW} )}(x,y) = \frac{{{\alpha _0}{s_0}{A_s}\cos \zeta }}{{L_0^2}}[{A(x,y) + B(x,y)} ]$$
In the transmission system, the transmitted luminous flux can be obtained from Eq. (27):
$${\Phi ^{({TW} )}}(x,y) = \beta s\left( {\frac{v}{\sigma }\sum\limits_{k = {\sigma_{k0}}}^{{\sigma_k}} {\sum\limits_{\gamma = {\sigma_{\gamma 0}}}^{{\sigma_\gamma }} {\frac{{{L^{(W )}}{A_s}\cos \theta_1^{(T )}\cos \theta_2^{(T )}}}{{{{({{r^{(T )}}} )}^2}}}} } + (1 - v)C(x,y)} \right)$$
$\theta _1^{(T )}$, $\theta _2^{(T )}$ and ${r^{(T )}}$ are given by Eqs. (21), (24), and (23), respectively.

If the region on the transparent object captured by one pixel of the camera imaging target is clean and intact, the luminous flux becomes:

$$\Phi _0^{({TW} )}(x,y) = \frac{{{\beta _0}{s_0}{A_s}}}{{L_0^2}}[{A(x,y)\textrm{ + }B(x,y)} ]$$
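
For comparison, a sketch of the relative luminous flux $\Phi^{(RW)}/\Phi_0^{(RW)}$ of Eqs. (32)–(33) under uniform planar illumination is given below. Unlike the modulation, the ambient term $(1 - v)C(x,y)$ survives here. The values of A, B and C are illustrative assumptions, C is expressed in the same units as the screen contribution (i.e. with ${A_s}$ factored out), and `reflection_geometry()` is the sketch from Section 2.2.

```python
import numpy as np

def relative_flux_RW(g_list, k_list, v=1.0, A=128.0, B=100.0, C=60.0,
                     cell=0.272e-3, L0=0.5, zeta=np.deg2rad(30.0), area_ratio=1.0):
    """Ratio of Eq. (32) to Eq. (33); area_ratio stands for alpha*s/(alpha0*s0)."""
    sigma = len(g_list) * len(k_list)
    L_W = A + B                                     # Eq. (31), uniform pattern
    screen = 0.0
    for g in g_list:
        for k in k_list:
            th1, th2, r = reflection_geometry(g, k, cell, L0, zeta)
            screen += L_W * np.cos(th1) * np.cos(th2) / r ** 2
    flux = (v / sigma) * screen + (1.0 - v) * C     # proportional to Eq. (32)
    flux0 = L_W * np.cos(zeta) / L0 ** 2            # proportional to Eq. (33)
    return area_ratio * flux / flux0
```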

3. Simulations

The change of the incident light region can be divided into two types: the increase of the incident light source region and the deviation of the incident light source position. From these two perspectives, the modulation simulations and the luminous flux simulations under uniform planar illumination are conducted as follows.

From Eqs. (17), (29), (32), and (34), it can be seen that, apart from the system parameters, the factor v in the modulation and luminous flux equations also affects their results. Therefore, the behavior of these equations can be observed by adjusting the value of v. The specific changes of the light sources are shown in Fig. 7.

Fig. 7. The increase of the incident light source region (a) with structured light illumination, (b) with uniform planar light illumination. The deviation of the incident light source region (c) with structured light illumination, (d) with uniform planar light illumination. The orange boxes represent the incident light source region when the object is clean and intact, and the red boxes indicate the incident light source region when there is contamination or defect.

3.1 The influence of the increase of the incident light source region

The situations of the increase of the incident light source region are shown in Figs. 7(a)–7(b). The side length of the orange region is set to 1, and that of the red region is set to R. The proportion of the first region in Fig. 3(c) and Fig. 6(b) is v.

First, the modulation results are calculated as R increases. These results are divided by the modulation value when the object is clean and intact, and the quotients are called the relative modulation. Second, by adjusting v, we observe how the modulation changes when the area proportion of the first region in Cell Zone 1 varies. Likewise, the relative luminous flux under uniform planar illumination can be calculated by the same two steps. Figures 8(a) and 8(b) are obtained from Eqs. (17)–(18) and Eqs. (32)–(33), respectively.
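
Under the same assumptions as the sketches in Section 2, this procedure can be reproduced as follows: the red source region of side R (in units of Cell Zone 2) is sampled on a grid of offsets, and the relative modulation and relative luminous flux are evaluated for several illustrative values of v.

```python
import numpy as np

for v in (1.0, 0.8, 0.6):                      # illustrative area ratios
    for R in range(1, 22, 4):                  # side length of the red region
        half = (R - 1) / 2.0
        offsets = np.arange(-half, half + 1.0) # R sample points along each axis
        rel_M = relative_modulation_R(offsets, offsets, v=v)
        rel_F = relative_flux_RW(offsets, offsets, v=v)
        print(f"v={v:.1f}  R={R:2d}  relative modulation={rel_M:.3f}  "
              f"relative flux={rel_F:.3f}")
```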

Fig. 8. In the reflection system, (a) shows the relative modulation, (b) shows the relative luminous flux. A single line indicates the tendency of modulation and luminous flux with the increase of R, and the different lines represent modulation and luminous flux at different v.

From Fig. 8(a), it can be seen that as the change in object topography caused by the contamination or defect increases, the incident light source region increases and the modulation results tend to decrease; the lower v is, the lower the modulation value. With the increase of the incident light source region, regardless of the area ratio v, the modulation value gradually decreases until it hovers near zero. From Fig. 8(b), it can be seen that as the change in object topography increases, the incident light source region increases and the luminous flux results tend toward stable values; the lower v is, the higher the stable value. When the luminous flux from the environment is large enough, the location with contamination or a defect may even be brighter than the clean and intact location. The luminous flux results are thus severely affected by ambient light. Nearly identical conclusions can be drawn from the simulation results of the transmission system.

3.2 The influence of the deviation of the incident light source position

The incident light source region deviates as shown in Figs. 7(c)–7(d). The side length of the orange region is set to 1, and that of the red region is also set to 1.

The modulation results are calculated as the incident light source position deviates, and are divided by the modulation value when the object is clean and intact. Likewise, the relative luminous flux under uniform planar illumination is obtained by a similar calculation. Figures 9(a)–9(b) and Fig. 9(c) are obtained from Eqs. (17)–(18) and Eqs. (29)–(30), respectively.
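
This deviation simulation can be sketched in the same way: a single Cell Zone 2 source (both orange and red regions of side 1) is shifted by (g, k), and the relative modulation of Eqs. (17)–(18) is evaluated on a grid of offsets. The grid size is an illustrative assumption; the resulting map plays the role of Fig. 9(a), and values above 1 correspond to the region of Fig. 9(b).

```python
import numpy as np

g_offsets = np.arange(-20, 21)                 # deviation in the gamma direction
k_offsets = np.arange(-20, 21)                 # deviation in the k direction
rel_map = np.array([[relative_modulation_R([g], [k]) for g in g_offsets]
                    for k in k_offsets])       # relative modulation vs. deviation
raised = rel_map > 1.0                         # region where the modulation increases
```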

Fig. 9. In the reflection system, (a) the modulation changes with the deviation of the incident light source region, (501,501) is the coordinates of the original planar light source, (b) the region where the modulation value is higher than original value. In the transmission system, (c) the modulation change with the deviation of the incident light source region, (501,501) is the coordinates of the original planar light source.

From Figs. 9(a)–9(b), it can be seen that when the offset position of the planar light source lies within a certain small region where $g > 501$, as shown in Fig. 9(b), the relative modulation rises above 1. In the remaining region, the modulation value decreases as the deviation of the planar light source increases. In the transmission system, however, increasing the offset of the planar light source always lowers the relative modulation value, and the downward trends of the modulation in the $\gamma$ and k directions are completely uniform, as shown in Fig. 9(c). From the luminous flux simulations based on Eqs. (32)–(35), it can be found that the deviation of the light source position has exactly the same effect on the luminous flux results as it does on the modulation results.

3.3 Analysis of simulation results

From Sections 3.1 and 3.2, it can be found that, compared with the uniform illumination systems, the results obtained by the modulation detection systems are much less affected by ambient light (the value of $v$), and the modulation value in a region with contamination or a defect is very likely to be smaller than the normal modulation value. In the uniform illumination systems, however, the luminous flux is greatly affected by ambient light, and it is usually impossible to extract the information of contamination and defects from the luminous flux results.

Both scratches and profile errors affect the modulation results. However, most profile errors are likely to cause a weaker change in the modulation than scratches, because scratches usually lead to an increase of the light source region, while profile errors usually result in a deviation of the light source position.

4. Experiments

The system structures used for detection are shown in Figs. 1 and 4. A glass plate is used in the experiments below. There are ten scratches on the glass plate, made by chemical etching. The widths of the scratches vary from 5 to 50 $\mu m$, with a length of 5 $mm$ and a depth of 2 $\mu m$; the width difference between two adjacent scratches is 5 $\mu m$. The reflection system comprises a display screen (Philips 246V6QSB, 1920 ${\times}$ 1080 pixels, 0.272 $mm$ pixel spacing in both directions) and a single camera (AVT Prosilica GT1600, 25 $mm$ focal length, 1600 ${\times}$ 1200 pixels, 3.69 $\mu m$ pixel spacing). The transmission system comprises a camera (AVT Manta G-917B, 25 $mm$ focal length, 3384 ${\times}$ 2710 pixels, 5.5 $\mu m$ pixel spacing) and the same screen as the reflection system.

4.1 Contamination and defect detection for the glass plate based on the reflection system and the transmission system

The detection results based on the reflection system and the transmission system are shown in Fig. 10. Figures 10(a)–10(d) are the results after contrast stretching, with the grayscale stretched to 0–255. The extra black points in Figs. 10(a) and 10(c) are actually dust. The comparison of the red line in Fig. 10(a) with the blue line in Fig. 10(b) is shown in Fig. 10(e), and that of the red line in Fig. 10(c) with the blue line in Fig. 10(d) is shown in Fig. 10(f).

Fig. 10. In the reflection system, (a) the modulation result, (b) the luminous flux result. In the transmission system, (c) the modulation result, (d) the luminous flux result. The comparison of modulation and luminous flux (e) in the reflection system, (f) in the transmission system.

From Figs. 10(a)–10(d), it can be found that the modulation result clearly reflects the contamination and defects on the objects, while the luminous flux result does not. This phenomenon is caused by ambient light: ambient light is unavoidable in direct detection, but it is well suppressed in the modulation results. As can be seen from Figs. 10(e)–10(f), the modulation information of the contamination and defects on the glass plate is obvious, and the scratches on the glass plate are uniformly arranged, which is consistent with the actual condition of the glass plate surface.

4.2 Advantages and disadvantages of SMAT based on the reflection system and the transmission system

SMAT based on the reflection system or transmission system has a significant advantage: the detection results are almost unaffected by ambient light, which makes it suitable for optical component inspection in industrial environments. The modulation values at contamination and defects are much lower than those at clean and intact regions. Defects and contamination can be clearly distinguished with SMAT, while they are almost indiscernible with uniform planar illumination.

It should be noted that in the reflection system, the double reflection from the front and back surfaces of transparent objects causes parasitic fringes [26] and leads to double images in the final modulation results, as shown in Fig. 10(a). This phenomenon does not occur in the transmission system because of the different light path.

Compared with the reflection system, the transmission system also performs better on large-curvature regions of transparent objects, because it can collect the light from these regions while the reflection system cannot.

In addition, the reflection system is usually an oblique-axis system; when the depth of field is insufficient, some regions of the object will be blurred. The transmission system avoids the uneven image sharpness caused by the varying distance between the detected object surface and the camera.

The reflection system nevertheless has advantages that the transmission system cannot replace: for non-transparent specular objects, only the reflection system can be applied to inspect the surface quality.

5. Conclusion

In this study, the mechanisms and mathematical models of SMAT are analyzed and established, and a novel transmission system is proposed for the detection of transparent objects. The display screen projects a series of phase-shifted structured-light patterns, and the camera captures the structured light reflected by the specular surface or transmitted through the transparent object. From these captured images, the modulation result is calculated, from which the information of contamination and defects on the detected object can be extracted. The photometric modulation equations of SMAT are established based on the theory of photometry and the optical characteristics of contamination and defects, and the luminous flux equations under uniform illumination are constructed at the same time. Simulations on the influence of the incident light source region showed that SMAT can eliminate the interference of ambient light, while the uniform planar illumination technique cannot. Experiments on samples with a specular surface and a transparent material demonstrated that the modulation values at contamination and defects are much lower than those at clean and intact regions, and that defects and contamination were clearly distinguished with SMAT, while they were almost indiscernible with uniform planar illumination. Therefore, SMAT can be applied to the whole-field inspection of optical components in industrial environments.

Funding

National Natural Science Foundation of China (61875035); Applied Basic Research Program of Sichuan Province (2018JY0579).

Disclosures

The authors declare no conflicts of interest.

References

1. S. G. Demos, M. Staggs, K. Minoshima, and J. Fujimoto, “Characterization of laser induced damage sites in optical components,” Opt. Express 10(25), 1444–1450 (2002). [CrossRef]  

2. B. Ma, Y. Zhang, H. Ma, H. Jiao, X. Cheng, and Z. Wang, “Influence of incidence angle and polarization state on the damage site characteristics of fused silica,” Appl. Opt. 53(4), A96–A102 (2014). [CrossRef]  

3. S. Gomez, K. Hale, J. Burrows, and B. Griffiths, “Measurements of surface defects on optical components,” Meas. Sci. Technol. 9(4), 607–616 (1998). [CrossRef]  

4. H. Ota, M. Hachiya, Y. Ichiyasu, and T. Kurenuma, “Scanning surface inspection system with defect-review SEM and analysis system solutions,” Hitachi Review 55, 78–82 (2006).

5. H. Zhang, Z. Wang, and H. Fu, “Automatic scratch detector for optical surface,” Opt. Express 27(15), 20910–20927 (2019). [CrossRef]  

6. X. Tao, Z. Zhang, F. Zhang, and D. Xu, “A novel and effective surface flaw inspection instrument for large-aperture optical elements,” IEEE Trans. Instrum. Meas. 64(9), 2530–2540 (2015). [CrossRef]  

7. T. A. Germer, “Angular dependence and polarization of out-of-plane optical scattering from particulate contamination, subsurface defects, and surface microroughness,” Appl. Opt. 36(33), 8798–8805 (1997). [CrossRef]  

8. Z. Gu, “Detection of a small defect on a rough surface,” Opt. Lett. 23(7), 494–496 (1998). [CrossRef]  

9. V. M. Schneider, M. Meljnek, and K. T. Gahagan, “Fast detection of single sided diffracted defects in display glass,” Measurement 42(4), 638–644 (2009). [CrossRef]  

10. K. Rebner, M. Schmitz, B. Boldrini, A. Kienle, D. Oelkrug, and R. W. Kessler, “Dark-field scattering microscopy for spectral characterization of polystyrene aggregates,” Opt. Express 18(3), 3116–3127 (2010). [CrossRef]  

11. X. Tao, D. Xu, Z. Zhang, F. Zhang, X. Liu, and D. Zhang, “Weak scratch detection and defect classification methods for a large-aperture optical element,” Opt. Commun. 387, 390–400 (2017). [CrossRef]  

12. D. Liu, S. Wang, P. Cao, L. Li, Z. Cheng, X. Gao, and Y. Yang, “Dark-field microscopic image stitching method for surface defects evaluation of large fine optics,” Opt. Express 21(5), 5974–5987 (2013). [CrossRef]  

13. S. S. Martínez, J. G. Ortega, J. G. García, and A. S. García, “A machine vision system for defect characterization on transparent parts with non-plane surfaces,” Mach. Vision Appl. 23(1), 1–13 (2012). [CrossRef]  

14. V. Srinivasan, H. C. Liu, and M. Halioua, “Automated phase-measuring profilometry of 3-D diffuse objects,” Appl. Opt. 23(18), 3105–3108 (1984). [CrossRef]  

15. Z. Wu, W. Guo, and Q. Zhang, “High-speed three-dimensional shape measurement based on shifting Gray-code light,” Opt. Express 27(16), 22631–22644 (2019). [CrossRef]  

16. L. Su, X. Su, W. Li, and L. Xiang, “Application of modulation measurement profilometry to objects with surface holes,” Appl. Opt. 38(7), 1153–1158 (1999). [CrossRef]  

17. M. Lu, X. Su, Y. Cao, Z. You, and M. Zhong, “Modulation measuring profilometry with cross grating projection and single shot for dynamic 3D shape measurement,” Opt. Laser Eng. 87, 103–110 (2016). [CrossRef]  

18. J. Huang, W. Chen, and X. Su, “Application of two-dimensional wavelet transform in the modulation measurement profilometry,” Opt. Eng. 56(3), 034105 (2017). [CrossRef]  

19. L. Huang, M. Idir, C. Zuo, and A. Asundi, “Review of phase measuring deflectometry,” Opt. Laser Eng. 107, 247–257 (2018). [CrossRef]  

20. Z. Zhang, Y. Wang, S. Huang, Y. Liu, C. Chang, F. Gao, and X. Jiang, “Three-Dimensional Shape Measurements of Specular Objects Using Phase-Measuring Deflectometry,” Sensors 17(12), 2835 (2017). [CrossRef]  

21. L. Huang, J. Xue, B. Gao, C. McPherson, J. Beverage, and M. Idir, “Modal phase measuring deflectometry,” Opt. Express 24(21), 24649–24664 (2016). [CrossRef]  

22. P. Zhao, N. Gao, Z. Zhang, F. Gao, and X. Jiang, “Performance analysis and evaluation of direct phase measuring deflectometry,” Opt. Laser Eng. 103, 24–33 (2018). [CrossRef]  

23. Y. Song, H. Yue, Y. Huang, Y. Fang, and Y. Liu, University of Electronic Science and Technology of China are preparing a manuscript to be called “A Novel Defect Detection Method with Eliminating Dusts for Specular Surface Based on Polarized Structured-light Illumination”.

24. F. Zhang, C. Li, and B. Meng, “Investigation of Surface Deformation Characteristic and Removal Mechanism for K9 Glass Based on Varied Cutting-depth Nano-scratch,” Chin. J. Mech. Eng. 52(17), 065–71 (2016). (in Chinese) [CrossRef]  

25. J. E. Greivenkamp, Field Guide to Geometrical Optics (SPIE, 2004).

26. L. Huang and A. Krishna, “Phase retrieval from reflective fringe patterns of double-sided transparent objects,” Meas. Sci. Technol. 23(8), 085201 (2012). [CrossRef]  
