Optica Publishing Group

Spatial phase-shifting profilometry by use of polarization for measuring 3D shapes of metal objects

Open Access

Abstract

In this paper, we present a polarization spatial phase-shifting method for fringe projection profilometry. It enables us to measure the three-dimensional shape of a metal object quickly, requiring only a single-shot implementation. With this method, two projectors are each equipped, in front of their lenses, with linear polarizing filters having orthogonal polarization directions, so that they can simultaneously cast two sinusoidal fringe patterns having different phase shifts onto the measured metal surfaces without mixture. To register the two projected patterns, we suggest a fringe alignment method based on the epipolar geometry between the projectors. By taking advantage of the property of metal surfaces of maintaining the polarization state of incident light, the deformed fringe patterns on the measured surfaces are captured by two coaxially arranged polarization cameras. The fringe phases are then calculated using a two-step phase-shifting algorithm and, further, the 3D shapes of the measured surfaces are reconstructed. Experimental results demonstrate the proposed method to be valid and efficient in measuring metal objects.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Metal materials, owing to their high strength, high toughness, and excellent machinability, are widely used in modern industries, especially in manufacturing. Because of this, it is important to measure the three-dimensional (3D) profiles of metal parts and structures, so that their geometries, deformations, or vibrations can be evaluated from the measurement results. Many techniques have been developed for this task; they are roughly classified into two categories: contacting and non-contacting methods.

Currently, contacting methods, for example those using coordinate measuring machines (CMMs) or coordinate measuring arms, remain the most widely used for measuring metal products in industry [1]. These methods measure the coordinates of a single point at a time by contacting the measured surface with a stylus, thus requiring a time-consuming scanning procedure to obtain a point cloud characterizing the whole surface.

As alternative solutions, optical techniques allow one to measure metal objects in a non-contacting way. For example, optical interferometry [2] is suitable for measuring highly smooth metal surfaces having flat, spherical, or cylindrical profiles. Combining interferometry with multiple-aperture stitching techniques [3–5] allows it to overcome its limitation in measuring large-scale objects. When measuring a specular surface having a free-form profile, however, fringe deflectometry [6–9] has shown much greater feasibility than interferometry, at the expense of measurement resolution. We note that most metal parts in industry do not have specular surfaces, but instead have a certain roughness and thus reflect light diffusely. Triangulation-based techniques, such as laser scanning [10], moiré [11,12], and fringe projection [13–16], are usually adopted to cope with this situation. Among them, the fringe projection technique is the most efficient because of its full-field advantage.

With the fringe projection technique, a projector casts one or more sinusoidal fringe patterns onto the measured object surface, and a camera at a different angle records the fringes deformed by the depth variations of the surface. Analyzing the deformed patterns yields their phase map, from which we reconstruct the object depths. In this procedure, the phase-retrieval algorithm used has been recognized as one of the most crucial factors affecting measurement resolution and efficiency. Generally, spatial fringe analysis methods, including the Fourier transform method [13,17], various convolution methods [18,19], and other nonlinear methods [20,21], have high measurement efficiency because they require capturing only one fringe pattern, but they suffer from relatively low measurement resolution, especially when measuring a surface having edges and discontinuities. In contrast, the temporal phase-shifting technique [22] involves a pointwise operation and hence achieves much higher resolution, although its dependence on multiple fringe patterns decreases its efficiency. Besides these two types of methods, the spatial phase-shifting technique enables recording a sequence of phase-shifted fringe patterns simultaneously, thus balancing measurement efficiency and resolution well.

The spatial phase-shifting technique works with the aid of a specially designed system configuration, and was originally used in interferometry and projection moiré. In spatial phase-shifting interferometry, the simultaneous phase-shifted fringe patterns are generated by means of polarization optics [23] or diffraction gratings [24]. The projection moiré technique uses a color grating to achieve the same purpose [25]. With the later advent of the digital color projector, the spatial phase-shifting technique was applied to fringe projection for measuring diffuse objects [26,27]. The color projector casts, in parallel through three chromatic channels, fringe patterns having different phase shifts onto the measured surfaces, and a color camera simultaneously grabs the three deformed patterns. With this method, the measurement results are sensitive to the colors of the object, a limitation that also exists when measuring a metal object. Developing a spatial phase-shifting method suitable for measuring metal objects will help solve this problem.

This paper suggests a polarization spatial phase-shifting method for fringe projection profilometry. In its system, two projectors are each equipped, in front of their lenses, with linear polarizing filters having orthogonal polarization directions. These two projectors are aligned to each other by using their epipolar lines. As a result, they can cast, in parallel, two sinusoidal fringe patterns having different phase shifts onto the measured metal surfaces without mixture. By taking advantage of the property of metal surfaces of maintaining the polarization state of incident light, the deformed fringe patterns on the measured surfaces are captured simultaneously by two coaxially arranged polarization cameras. A two-step phase-shifting algorithm is employed to retrieve the fringe phases and, further, the 3D shapes of the measured surfaces are reconstructed from these phases. Experimental results demonstrate that the proposed method enables us to measure a metal object efficiently, requiring only a single-shot implementation, with satisfactorily high measurement accuracy.

2. System and principle

In this section, we suggest a spatial phase-shifting method based on polarization. It is commonly known that when polarized light is reflected from a Lambertian surface, the reflected light is unpolarized. Therefore, polarization-based spatial phase-shifting is generally not used to measure a diffuse object. We note, however, that metal surfaces have the property of maintaining the polarization state of incident light. This fact implies the possibility of using polarized light to project simultaneous fringe patterns having different phase shifts when measuring a metal object. In existing techniques of metal object measurement, polarized light has been applied to overcome the oversaturation issue. In this work, we use polarization for a very different purpose: to provide isolated channels for fringe patterns having different phase shifts. This allows us to measure a metal object efficiently, involving only a single-shot implementation.

This polarization spatial phase-shifting profilometry works with the measurement system shown schematically in Fig. 1(a). Different from the system used in the traditional fringe projection technique, this system consists of two projectors and two cameras. In front of each projector lens, we insert a polarizer (i.e., a linear polarizing filter), and the two polarizers have orthogonal polarization directions. As a result, the projectors can cast, in parallel, two sinusoidal fringe patterns having different phase shifts onto the measured metal surfaces without mixture. An issue arising from this system is that the two projectors are difficult to adjust into an exactly coaxial arrangement, because projectors, unlike cameras, cannot take images as feedback for adjusting their positions. To solve this problem, we suggest using fringes along the epipolar lines between the two projectors, so that the two projectors can cast fringes like a single projector even though they do not have a coincident optical axis. We shall discuss this issue in the next section.


Fig. 1. (a) the measurement system; (b) the epipolar geometry of the two projectors.


As mentioned above, the two projectors work like a coaxial projection system, so we can denote their sinusoidal fringe patterns to be projected using a unified formula as

$${g_k}(m,n) = a + b\cos (2\pi m/p + {\delta _k}),\quad k = 1,2$$
where the subscript $k$ indexes the projectors, $({m,n} )$ are the coordinates of a projector point, and $a$ and $b$ are the bias and amplitude of the sinusoidal fringes, respectively. $p$ is the fringe pitch; for the moment we assume the fringes are perpendicular to the $m$-axis. ${\delta _k}$ denotes the phase shifts. These two fringe patterns, projected through the polarizers, have orthogonal polarization directions.
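Equation (1) is straightforward to realize digitally. The sketch below (Python/NumPy) generates the two patterns with a relative phase shift of $\pi/2$; the resolution, pitch, and 8-bit intensity scaling are our own illustrative choices, not values prescribed by the text.

```python
import numpy as np

def make_fringe_pattern(width, height, pitch, delta, a=127.5, b=127.5):
    """Sinusoidal fringe pattern g_k(m, n) = a + b*cos(2*pi*m/p + delta_k).

    Fringes are perpendicular to the m-axis (vertical fringes); a and b are
    chosen so intensities span the 8-bit range [0, 255].
    """
    m = np.arange(width)
    row = a + b * np.cos(2 * np.pi * m / pitch + delta)
    return np.tile(row, (height, 1))

# Two patterns with a pi/2 relative phase shift, one per projector.
g1 = make_fringe_pattern(1920, 1080, pitch=18, delta=0.0)
g2 = make_fringe_pattern(1920, 1080, pitch=18, delta=np.pi / 2)
```

In practice each pattern would be written to its projector's frame buffer; Section 3.1 replaces these straight vertical fringes with fringes aligned along epipolar lines.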

Since the proposed method provides a channel for each phase-shifted pattern by use of polarization, the reflection property of the metal surface affects its results. To simplify the discussion, we first consider the metal surface to be smooth. When linearly polarized light is reflected from a smooth metal surface, the reflected light generally becomes elliptically polarized, depending on the incidence angle and the complex refractive index of the metal [28]. Denoting the directions of the major and minor axes of the ellipse as $\alpha $ and $\beta $, respectively, the intensity reflected by the metal surface is a combination of two components in the $\alpha $- and $\beta $-directions, namely,

$${R_k}(x,y,z) = {r_{\alpha k}}{g_{\alpha k}}(x,y,z) + {r_{\beta k}}{g_{\beta k}}(x,y,z),\quad k = 1,2$$
where $({x,y,z} )$ are the coordinates of a point on the metal surface illuminated by the projector point $({m,n} )$. ${r_{\alpha k}}$ and ${r_{\beta k}}$ are scale factors related to the reflectivity, and they may have different values in the $\alpha $- and $\beta $-directions. Both terms in Eq. (2) contain fringe signals.

According to the metal reflection model, the ellipticity of the polarization state of the reflected light strongly depends on the incidence angle. The fringe projection technique usually involves relatively small incidence angles, in which case the intensity component in the $\beta $-direction is far smaller than that in the $\alpha $-direction; in other words, the second term in Eq. (2) can be neglected. This fact supports the principle that, when measuring metal objects, linearly polarized light can be used to form separate channels for fringe patterns having different phase shifts.
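This angular dependence can be checked numerically with the Fresnel equations for a complex refractive index. The sketch below (Python/NumPy) is only a plausibility check, not part of the paper's method; the index value for aluminium and the 45° linearly polarized input are our own assumptions. It computes the ellipticity angle of the reflected light, which stays near zero at small incidence angles.

```python
import numpy as np

def fresnel_metal(theta_i, n_metal, n_air=1.0):
    """Complex amplitude reflection coefficients r_s, r_p at a metal surface."""
    ct_i = np.cos(theta_i)
    st_t = n_air * np.sin(theta_i) / n_metal      # complex Snell's law
    ct_t = np.sqrt(1 - st_t**2)
    r_s = (n_air * ct_i - n_metal * ct_t) / (n_air * ct_i + n_metal * ct_t)
    r_p = (n_metal * ct_i - n_air * ct_t) / (n_metal * ct_i + n_air * ct_t)
    return r_s, r_p

def ellipticity_angle(theta_i, n_metal):
    """Ellipticity angle chi of the reflected light for 45-degree linear input."""
    r_s, r_p = fresnel_metal(theta_i, n_metal)
    psi = np.arctan(np.abs(r_p / r_s))            # amplitude ratio angle
    delta = np.angle(r_p / r_s)                   # relative phase of r_p and r_s
    return 0.5 * np.arcsin(np.sin(2 * psi) * np.sin(delta))

n_al = 1.37 + 7.62j   # assumed complex index of aluminium near 633 nm
chi_small = ellipticity_angle(np.deg2rad(10), n_al)   # near-normal incidence
chi_large = ellipticity_angle(np.deg2rad(70), n_al)   # grazing incidence
```

At 10° incidence the ellipticity angle is a few milliradians, while at 70° it is roughly two orders of magnitude larger, consistent with neglecting the $\beta$-component for the small angles typical of fringe projection.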

Returning to Fig. 1(a), we use two cameras to capture the fringe patterns reflected from the metal surface. By means of a beam splitter, we adjust the cameras so that they have a coincident optical axis and corresponding pixel positions. In front of each camera lens, we insert an analyzer (i.e., a linear polarizing filter); the two analyzers have orthogonal polarization directions. Neglecting the component along the minor axis of the elliptically polarized light, the intensities recorded by the two cameras are represented by

$$\left\{ {\begin{array}{{c}} {{I_1}(u,v) = {r_{\alpha 1}}{g_{\alpha 1}}(x,y,z){{\cos }^2}{\theta_1} + {r_{\alpha 2}}{g_{\alpha 2}}(x,y,z){{\sin }^2}{\theta_2}}\\ {{I_2}(u,v) = {r_{\alpha 1}}{g_{\alpha 1}}(x,y,z){{\sin }^2}{\theta_1} + {r_{\alpha 2}}{g_{\alpha 2}}(x,y,z){{\cos }^2}{\theta_2}} \end{array}} \right.$$
where $({u,v} )$ are the coordinates of the camera pixel at which the object point $({x,y,z} )$ produces its image. ${\theta _1}$ and ${\theta _2}$ denote the angles by which the polarization directions of the two analyzers deviate from the major axes of the corresponding elliptically polarized light. Note that the two projected patterns in Eq. (1) have orthogonal polarization directions. The major axes of their reflected light are approximately orthogonal for small incidence angles, implying that ${\theta _1}$ and ${\theta _2}$ have almost equal values.

In fact, the patterns represented by Eq. (3) are two sinusoidal fringe patterns having different phase shifts. For computational convenience, we can adjust the analyzers so that their polarization directions coincide with the major axes of the elliptically polarized light. In this case, ${\theta _1}$ and ${\theta _2}$ are approximately equal to 0, and the two patterns in Eq. (3) become

$$\left\{ {\begin{array}{{c}} {{I_1}(u,v) = {r_{\alpha 1}}{g_{\alpha 1}}(x,y,z) = {A_1}(u,v) + {B_1}(u,v)\cos [\Phi (u,v) + {\delta_1}]}\\ {{I_2}(u,v) = {r_{\alpha 2}}{g_{\alpha 2}}(x,y,z) = {A_2}(u,v) + {B_2}(u,v)\cos [\Phi (u,v) + {\delta_2}]} \end{array}} \right.$$
where ${A_k}({u,v} )$ and ${B_k}({u,v} )$ with $k = 1,2$ are the background intensities and modulations at $({u,v} )$, respectively, and $\mathrm{\Phi }({u,v} )$ denotes the phase caused by the depth variation of the measured surface. In practice, the measured metal surface is usually not smooth but rough, and thus reflects light diffusely. This may induce a certain degree of depolarization of the reflected light, but the polarization cameras can still detect their corresponding fringe signals from the separate channels.

When the deformed fringe patterns are captured, a two-step phase-shifting algorithm is used to retrieve the fringe phase map; we introduce its procedure in Section 3.2. The depth map of the object surface is then reconstructed from the phase map, just as in the standard fringe projection technique.

3. Implementations

3.1 Projector alignment

We mentioned in the previous section that the two projectors in Fig. 1(a) are difficult to adjust into an exactly coaxial system, and that using fringes along the epipolar lines between the two projectors enables us to overcome this problem.

To gain insight into its principle, we analyze the epipolar geometry of the two projectors shown in Fig. 1(b). The points ${O_1}$ and ${O_2}$ denote the centers of the lenses of the two projectors, and the line ${O_1}{O_2}$ is their baseline. The planes ${\mathrm{\Pi }_1}$ and ${\mathrm{\Pi }_2}$ denote the image planes of the two projectors, respectively. The baseline ${O_1}{O_2}$ crosses ${\mathrm{\Pi }_1}$ and ${\mathrm{\Pi }_2}$ at the epipoles ${e_1}$ and ${e_2}$, respectively. Assuming that $Q$ is a point in the object field, the three points $Q$, ${O_1}$, and ${O_2}$ form a plane called the epipolar plane. Its intersection lines with ${\mathrm{\Pi }_1}$ and ${\mathrm{\Pi }_2}$, i.e., ${l_1}$ and ${l_2}$, are the epipolar lines on ${\mathrm{\Pi }_1}$ and ${\mathrm{\Pi }_2}$, respectively.

Assume that, on the plane ${\mathrm{\Pi }_1}$, there is a fringe having a direction along ${l_1}$. When this fringe is projected onto an object surface, the deformed fringe on the surface must lie along the line where the surface intersects the epipolar plane. Similarly, a fringe along ${l_2}$ on ${\mathrm{\Pi }_2}$ produces a deformed fringe along the same intersection line. This means that the two fringes, though projected by different projectors, always lie within their epipolar plane and are thus exactly aligned in the object space. From this fact, we know that the two projectors, having no common optical axis, can cast fringes like a single projector as long as the fringes are designed to lie along their epipolar lines.

Assuming that $({{m_1},{n_1}} )$ and $({{m_2},{n_2}} )$ denote the pixel coordinates of points on ${l_1}$ and ${l_2}$, respectively, these two sets of points must satisfy the following epipolar constraint equation [29].

$$\left[ {\begin{array}{ccc} {{m_1}}&{{n_1}}&1 \end{array}} \right]\left[ {\begin{array}{ccc} {{f_{11}}}&{{f_{12}}}&{{f_{13}}}\\ {{f_{21}}}&{{f_{22}}}&{{f_{23}}}\\ {{f_{31}}}&{{f_{32}}}&{{f_{33}}} \end{array}} \right]\left[ {\begin{array}{{c}} {{m_2}}\\ {{n_2}}\\ 1 \end{array}} \right] = 0$$
where the 3-by-3 matrix connecting the two sets of points is the fundamental matrix of the two-projector system. In the field of machine vision, estimating the fundamental matrix from pairs of corresponding points in two images is a well-solved problem [29]. With the proposed technique, however, the difficulty is that the projectors cannot capture images, so we have to match the points of the two projectors with the aid of a camera and a reference board. The procedure is as follows. First, use the first projector to project both vertical (perpendicular to the $m$-axis) and horizontal (perpendicular to the $n$-axis) sinusoidal fringe patterns onto the reference board, capture the deformed fringes using the camera, recover their phases as ${\mathrm{\Phi }_{\textrm{V}1}}({u,v} )$ and ${\mathrm{\Phi }_{\textrm{H}1}}({u,v} )$, respectively, and then convert the phases into projector pixel coordinates, i.e., $[{m_1}({u,v} ),\;{n_1}({u,v} )] = [{\mathrm{\Phi }_{\textrm{V}1}}({u,v} ),\;{\mathrm{\Phi }_{\textrm{H}1}}({u,v} )]p/2\pi $, with $p$ being the fringe pitch in pixels. Second, use the second projector to do the same, recovering the phases ${\mathrm{\Phi }_{\textrm{V}2}}({u,v} )$ and ${\mathrm{\Phi }_{\textrm{H}2}}({u,v} )$ and further the projector pixel coordinates $[{m_2}({u,v} ),\;{n_2}({u,v} )] = [{\mathrm{\Phi }_{\textrm{V}2}}({u,v} ),\;{\mathrm{\Phi }_{\textrm{H}2}}({u,v} )]p/2\pi $. Third, move the reference board to several different depth positions, and repeat the first and second steps at each position. Fourth, substitute all the corresponding pixel pairs, $[{m_1}({u,v} ),\;{n_1}({u,v} )]$ and $[{m_2}({u,v} ),\;{n_2}({u,v} )]$, into Eq. (5), and estimate the elements of the fundamental matrix in the least-squares sense through singular value decomposition.
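The fourth step is the standard homogeneous least-squares estimator. A sketch (Python/NumPy), written for the convention $[m_1\;\, n_1\;\, 1]\,F\,[m_2\;\, n_2\;\, 1]^T = 0$ of Eq. (5); function and variable names are our own, and practical implementations would also normalize the coordinates first:

```python
import numpy as np

def estimate_fundamental(p1, p2):
    """Least-squares fundamental matrix from corresponding projector points.

    p1, p2: (N, 2) arrays of pixel coordinates (m, n) on the two projector
    planes, N >= 8. Each pair gives one row of the homogeneous system A f = 0;
    f is the right singular vector of the smallest singular value, and F is
    then forced to rank 2 as usual.
    """
    m1, n1 = p1[:, 0], p1[:, 1]
    m2, n2 = p2[:, 0], p2[:, 1]
    A = np.column_stack([m1 * m2, m1 * n2, m1,
                         n1 * m2, n1 * n2, n1,
                         m2, n2, np.ones(len(p1))])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)          # row-major: f11 ... f33
    U, S, Vt = np.linalg.svd(F)
    S[-1] = 0.0                       # enforce the rank-2 constraint
    return U @ np.diag(S) @ Vt
```

The returned matrix is defined only up to scale, which is all the fringe-alignment procedure needs.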
When the fundamental matrix is determined, the coordinates of epipoles of the two projectors, i.e., $({m_{e1}},{n_{e1}})$ on ${\mathrm{\Pi }_1}$ and $({m_{e2}},{n_{e2}})$ on ${\mathrm{\Pi }_2}$, are calculated, respectively, by solving the following two systems of equations, viz.
$$\left[ {\begin{array}{ccc} {{m_{e1}}}&{{n_{e1}}}&1 \end{array}} \right]\left[ {\begin{array}{ccc} {{f_{11}}}&{{f_{12}}}&{{f_{13}}}\\ {{f_{21}}}&{{f_{22}}}&{{f_{23}}}\\ {{f_{31}}}&{{f_{32}}}&{{f_{33}}} \end{array}} \right] = 0$$
and
$$\left[ {\begin{array}{ccc} {{f_{11}}}&{{f_{12}}}&{{f_{13}}}\\ {{f_{21}}}&{{f_{22}}}&{{f_{23}}}\\ {{f_{31}}}&{{f_{32}}}&{{f_{33}}} \end{array}} \right]\left[ {\begin{array}{{c}} {{m_{e2}}}\\ {{n_{e2}}}\\ 1 \end{array}} \right] = 0$$
When the fundamental matrix and the coordinates of the epipoles are determined, it is easy to generate fringes along epipolar lines. In this work, we have to generate, for the two projectors, two fringe patterns having a relative phase shift between them. First, we generate a fringe pattern for the first projector simply by making each fringe lie along a straight line passing through its epipole $({m_{e1}},{n_{e1}})$. Second, we generate another fringe pattern for the first projector, which has a relative phase shift from the first one. Third, we generate a fringe pattern for the second projector by making each of its pixels $({m_2},{n_2})$ have the same gray level as the pixel $({m_1},{n_1})$ in the second fringe pattern of the first projector, where $({m_1},{n_1})$ is a corresponding pixel of $({m_2},{n_2})$ satisfying the epipolar constraint in Eq. (5). In this procedure, bilinear interpolation is used to handle pixels having fractional coordinates. By doing so, the fringes projected onto an object by the two projectors are aligned, just as if they were cast from a coaxial system.
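For the first projector, "fringes along straight lines through the epipole" means the intensity depends only on a pixel's angular position about the epipole. A minimal sketch (Python/NumPy); the angular-period parameterization and all names are our own illustrative choices, and the second projector's pattern, resampled through the epipolar correspondence with bilinear interpolation, is omitted:

```python
import numpy as np

def epipolar_fringes(width, height, epipole, pitch_angle, delta, a=127.5, b=127.5):
    """Pattern whose fringes lie along straight lines through the epipole.

    The intensity depends only on a pixel's angular position about the
    epipole (m_e, n_e), so every iso-intensity line passes through it.
    pitch_angle is the angular fringe period in radians.
    """
    n, m = np.mgrid[0:height, 0:width].astype(float)
    ang = np.arctan2(n - epipole[1], m - epipole[0])
    return a + b * np.cos(2 * np.pi * ang / pitch_angle + delta)

# With the epipole far outside the pattern (nearly parallel projector axes),
# the fringes look almost parallel, as in the paper's Fig. 2(b).
g = epipolar_fringes(400, 300, epipole=(-5000.0, 150.0),
                     pitch_angle=0.002, delta=0.0)
```

A constant-angular-period pattern is only one possible choice; any monotone phase function of the angle keeps the fringes on lines through the epipole.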

Note that, with this method, the fringes have a direction approximately parallel to the baseline of the two projectors. To achieve high phase sensitivity [30], the cameras should be placed to one side in the direction perpendicular to this baseline, as shown in Fig. 1(a).

3.2 Phase measuring

Using the system in Fig. 1(a), we can simultaneously capture two deformed fringe patterns represented by Eq. (4) through two polarization channels. Note that these two patterns may have different background intensities and modulations because of the reflection properties of metal objects. We correct these differences by using the histogram-based method [31], so that we have

$$\left\{ {\begin{array}{{c}} {{I_1}(u,v) = A(u,v) + B(u,v)\cos [\Phi (u,v) + {\delta_1}]}\\ {{I_2}(u,v) = A(u,v) + B(u,v)\cos [\Phi (u,v) + {\delta_2}]} \end{array}} \right.$$

This equation system contains three unknowns, $A({u,v} )$, $B({u,v} )$, and $\mathrm{\Phi }({u,v} )$, for each pixel $({u,v} )$, so it is underdetermined. For this reason, two-step phase-shifting algorithms usually involve a temporal-spatial rather than a purely temporal operation. They are based on the condition that, in comparison with the fringe fluctuations, $A({u,v} )$, $B({u,v} )$, and $\mathrm{\Phi }({u,v} )$ vary much more slowly across the image. Following this idea, we assume for the moment that the fringe intensities fluctuate mainly along the $u$-direction, so we have $A({u,v} )\approx A({u + 1,v} )$, $B({u,v} )\approx B({u + 1,v} )$, and $\partial \mathrm{\Phi }({u,v} )/\partial u \approx {\omega _u}$, with ${\omega _u}$ being the carrier frequency along the $u$-direction. Using the intensities at the two neighboring pixels $({u,v} )$ and $({u + 1,v} )$, we obtain a system of four linear equations, viz.

$$\left[ {\begin{array}{ccc} 1&{\cos ({\delta_1})}&{ - \sin ({\delta_1})}\\ 1&{\cos ({\delta_1} + {\omega_u})}&{ - \sin ({\delta_1} + {\omega_u})}\\ 1&{\cos ({\delta_2})}&{ - \sin ({\delta_2})}\\ 1&{\cos ({\delta_2} + {\omega_u})}&{ - \sin ({\delta_2} + {\omega_u})} \end{array}} \right]\left[ {\begin{array}{{l}} {A(u,v)}\\ {{C_1}(u,v)}\\ {{C_2}(u,v)} \end{array}} \right] = \left[ {\begin{array}{{l}} {{I_1}(u,v)}\\ {{I_1}(u + 1,v)}\\ {{I_2}(u,v)}\\ {{I_2}(u + 1,v)} \end{array}} \right]$$
where ${C_1}({u,v} )= B({u,v} )\textrm{cos}\Phi ({u,v} )$ and ${C_2}({u,v} )= B({u,v} )\textrm{sin}\Phi ({u,v} )$. The carrier frequency ${\omega _u}$, which equals $2\pi $ times the reciprocal of the average fringe pitch along the $u$-direction, can be estimated from the fringe patterns themselves. Solving the system of equations in Eq. (9) in the least-squares sense for the unknowns $A({u,v} )$, ${C_1}({u,v} )$, and ${C_2}({u,v} )$, we obtain the phases
$$\phi (u,v) = \arctan \frac{{{C_2}(u,v)}}{{{C_1}(u,v)}}$$

This equation yields a phase map wrapped within the range from $- \pi $ to $\pi $ radians, i.e., within the principal range of the four-quadrant arctangent function. To distinguish the wrapped phases from the unwrapped phases $\mathrm{\Phi }$, we denote them by the lower-case letter $\phi $ in Eq. (10). Note that, to avoid the system of equations in Eq. (9) becoming ill-conditioned, the relative phase step between the two fringe patterns, ${\delta _2} - {\delta _1}$, must not be a multiple of $\pi $ radians.
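Because the 4-by-3 coefficient matrix in Eq. (9) is the same for every pixel, the least-squares solve reduces to one precomputed pseudoinverse applied across the image. A sketch (Python/NumPy; vectorized over the image, with names of our own choosing, as the paper does not prescribe an implementation):

```python
import numpy as np

def two_step_phase(I1, I2, delta1, delta2, omega_u):
    """Wrapped phase from two phase-shifted patterns, Eq. (9)-(10) style.

    Each pixel (u, v) and its right-hand neighbour (u+1, v) contribute four
    intensity samples; the unknowns A, C1 = B*cos(Phi), C2 = B*sin(Phi) are
    solved in the least-squares sense via the pseudoinverse.
    """
    M = np.array([
        [1.0, np.cos(delta1),           -np.sin(delta1)],
        [1.0, np.cos(delta1 + omega_u), -np.sin(delta1 + omega_u)],
        [1.0, np.cos(delta2),           -np.sin(delta2)],
        [1.0, np.cos(delta2 + omega_u), -np.sin(delta2 + omega_u)],
    ])
    Mp = np.linalg.pinv(M)                 # shared 3x4 least-squares solver
    b = np.stack([I1[:, :-1], I1[:, 1:], I2[:, :-1], I2[:, 1:]])
    A, C1, C2 = np.tensordot(Mp, b, axes=1)
    return np.arctan2(C2, C1)              # wrapped phase in (-pi, pi]
```

The output has one fewer column than the input, since the last column has no right-hand neighbour; as noted above, the solve degenerates if $\delta_2 - \delta_1$ is a multiple of $\pi$.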

If the fringe intensities fluctuate mainly along $v$-direction, the phases are calculated in a similar way by replacing the neighboring pixel $({u + 1,v} )$ with $({u,v + 1} )$, and the carrier frequency ${\omega _u}$ with ${\omega _v}$, respectively. Unwrapping the wrapped phase map $\phi ({u,v} )$ calculated by using Eq. (10), we obtain $\mathrm{\Phi }({u,v} )$. From $\mathrm{\Phi }({u,v} )$, the depth map will be reconstructed in the next subsection.

3.3 Depth map reconstruction

Fringe projection profilometry requires a system calibration to determine the relations between the phase map and the 3D point coordinates. The general approach is to explicitly calibrate all the parameters of the cameras and the projectors, including their intrinsic and extrinsic parameters [32–34]. With the proposed technique, however, the measurement system is somewhat complex, consisting of two projectors without coincident optical axes. It is more convenient to calibrate it using a reference-plane-based method [35,36] instead, which gives a mapping function between the phases and object depths as

$$h(u,v) = \frac{{E(u,v)[\Phi (u,v) - {\Phi _0}(u,v)]}}{{1 + F(u,v)[\Phi (u,v) - {\Phi _0}(u,v)]}}$$
where ${\mathrm{\Phi }_0}({u,v} )$ denotes the reference phase map, and $E({u,v} )$ and $F({u,v} )$ are coefficients. This function implicitly characterizes the system geometry. Its coefficients are determined by using a reference board. First, position the reference board at the depth $H = 0$, project fringes onto it, and then measure its phase map ${\mathrm{\Phi }_0}({u,v} )$ which serves as the reference phase map. Second, shift the reference board, along the direction perpendicular to this board, to $K \ge 3$ different known depths. At each depth ${H_k}$, measure its phase map ${\mathrm{\Phi }_k}({u,v} )$. Third, for each pixel $({u,v} )$, solve the following equation system for the coefficients $E({u,v} )$ and $F({u,v} )$.
$$\left[ {\begin{array}{cc} {\sum\limits_{k = 1}^K {{{({\Phi _k} - {\Phi _0})}^2}} }&{ - \sum\limits_{k = 1}^K {{{({\Phi _k} - {\Phi _0})}^2}{H_k}} }\\ { - \sum\limits_{k = 1}^K {{{({\Phi _k} - {\Phi _0})}^2}{H_k}} }&{\sum\limits_{k = 1}^K {{{({\Phi _k} - {\Phi _0})}^2}H_k^2} } \end{array}} \right]\left[ {\begin{array}{{c}} E\\ F \end{array}} \right] = \left[ {\begin{array}{{c}} {\sum\limits_{k = 1}^K {({\Phi _k} - {\Phi _0}){H_k}} }\\ { - \sum\limits_{k = 1}^K {({\Phi _k} - {\Phi _0})H_k^2} } \end{array}} \right]$$
where the notation of coordinates, $({u,v} )$, is omitted for simplicity. When $E({u,v} )$ and $F({u,v} )$ are determined, we use Eq. (11) to convert the phases calculated in Section 3.2 into object depths.
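The per-pixel 2-by-2 solve of Eq. (12) and the mapping of Eq. (11) can be sketched as follows (Python/NumPy; array shapes and names are our own illustrative choices):

```python
import numpy as np

def fit_depth_coeffs(phis, phi0, H):
    """Per-pixel coefficients E, F of Eq. (11) from K calibration phase maps.

    phis: (K, h, w) unwrapped phase maps at known depths H (length K);
    phi0: (h, w) reference phase map at depth 0. Solves the normal
    equations of Eq. (12) with Cramer's rule at every pixel.
    """
    d = phis - phi0                          # Phi_k - Phi_0, shape (K, h, w)
    Hk = np.asarray(H, dtype=float)[:, None, None]
    a11 = (d**2).sum(0)
    a12 = -(d**2 * Hk).sum(0)                # off-diagonal (symmetric)
    a22 = (d**2 * Hk**2).sum(0)
    b1 = (d * Hk).sum(0)
    b2 = -(d * Hk**2).sum(0)
    det = a11 * a22 - a12**2
    E = (a22 * b1 - a12 * b2) / det
    F = (a11 * b2 - a12 * b1) / det
    return E, F

def phase_to_depth(phi, phi0, E, F):
    """Eq. (11): convert an unwrapped phase map to a depth map."""
    d = phi - phi0
    return E * d / (1 + F * d)
```

With $K \ge 3$ board positions, as the text requires, the 2-by-2 system is overdetermined-consistent and the fit is well posed wherever the phase actually changes with depth.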

4. Experiments

The feasibility of the proposed technique is verified experimentally. The measurement system, shown schematically in Fig. 1(a) in Section 2 and photographed in Fig. 2(a), mainly contains two AVT G192B cameras and a Liying LDLP-500 projection set consisting of two projectors having orthogonal polarization directions. The two cameras have orthogonal polarization analyzers in front of their lenses and a coincident optical axis by means of a beam splitter. In addition, a plane board is used as the reference board. This measurement system has a measurement volume of $400 \times 300 \times 80\;\textrm{m}{\textrm{m}^3}$, which is determined by the field ranges and depths of focus of the projectors and cameras.


Fig. 2. (a) The measurement system mainly contains two cameras and a projection set consisting of two projectors having orthogonal polarization directions; (b) The central parts of two generated fringe patterns for the two projectors.


After calibrating the fundamental matrix of the two projectors and determining their epipole coordinates, we generate a pair of sinusoidal fringe patterns for them using the method presented in Section 3.1. Each pattern has a size of $1920 \times 1080$ pixels. To achieve a high measurement resolution, the fringes in these patterns are designed to have high frequencies. In Fig. 2(b), to show these dense fringes clearly, we exhibit only the central parts of the two patterns, each of size $500 \times 300$ pixels. Because the two projectors in these experiments have nearly parallel axes, their epipoles are located far from the pattern centers. As a result, the fringes of each projector, which intersect at the very distant epipole, look almost parallel in the generated pattern, as seen in Fig. 2(b). The two patterns have subtle differences in fringe directions; when projected onto the object, the fringes become parallel with one another. Between the two patterns, the phase shift is $\pi /2$ radians. We use these patterns to measure metal objects.

First, we measure a block of aluminum alloy having three steps, a photograph of which is given in Fig. 3(a). In the measurement, we set the period of the projected fringes to correspond to an average pitch of 3.24 mm on the reference board. According to this fringe density, a $2\pi $-radian phase difference corresponds to an unambiguous height variation of less than 33 mm, depending on the fringe orders. The phase shifts of the two fringe patterns are $0$ and $\pi /2$ radians, respectively. When these two patterns are simultaneously projected onto the measured block through the two polarization channels, the two polarization cameras capture the two separated patterns shown in Figs. 3(b) and 3(c), respectively. By employing the two-step phase-shifting algorithm of Section 3.2, we calculate, from these two patterns, the fringe modulations and phases. Figure 3(d) shows the fringe modulations, which can be used for segmenting the valid measurement regions and also as a measure of pixel reliability for directing the phase unwrapping path. Figures 3(e) and 3(f) show the wrapped and unwrapped phase maps, respectively. Using the technique presented in Section 3.3, the depth map and, further, the point cloud are reconstructed. The results are given in Fig. 4.


Fig. 3. Experimental results of measuring a block of aluminum alloy. (a) is the photograph of the measured object; (b) and (c) show fringe patterns captured by two cameras simultaneously, which have phase shifts 0 and $\pi /2$ radians, respectively; (d) shows the fringe modulations calculated from (b) and (c); (e) is the wrapped phase map in radians calculated using a two-step phase-shifting algorithm; (f) gives the unwrapped phase map in radians.



Fig. 4. 3D measurement results of the block: (a) the depth map reconstructed from the phase map in Fig. 3(f); (b) the point cloud of the measured block.


In order to investigate the accuracy and precision of the proposed method, we compare its measurement result with those of other methods in Fig. 5 and Table 1. The standard approach to evaluating the precision of a method is to measure a specimen with it repeatedly and then calculate the standard deviation (SD) of the results. In Table 1, we instead calculate root-mean-square (RMS) errors from the data of a single measurement, because the measurement errors, as random variables, vary over the object surface in a similar way to how they vary over time. The cross-section shown in Fig. 5(a) is obtained by using the temporal phase-shifting technique [14]. Because a four-step algorithm was used, its results are immune to the second order of fringe harmonics. Besides, the temporal technique involves only pointwise operations, which protects the object edges from being blurred. It achieves a very high measurement resolution as known. The RMS error of the recovered surface, relative to the nominal size, is 0.08 mm as listed in Table 1. Figure 5(b) gives the cross-section along the same position when the block is measured by using a temporal two-step phase-shifting technique [37]. Requiring fewer fringe patterns, the two-step technique is more efficient than multi-step techniques at the expense of measurement resolution. The two-step technique is sensitive to fringe harmonics. It is not a purely temporal technique and usually involves a spatial operation to normalize the fringes, thus inducing large errors near edges. Table 1 shows that its RMS error is 0.11 mm. Figure 5(c) gives the result of the Fourier transform method [13]. Because this method can recover an object profile from a single fringe pattern, it can be used to measure dynamic objects. Its limitation is its relatively low measurement resolution, especially when measuring an object having discontinuities and edges. Its RMS error as listed in Table 1 is 0.27 mm. The rightmost panel of Fig. 5 shows the result of our newly proposed technique. In comparison with temporal methods, a spatial phase-shifting technique involves additional error-inducing factors, such as signal cross-talk, illumination imbalance, and optical axis misalignments between channels, and therefore its precision is decreased. In this experiment, the RMS error of the proposed technique is 0.17 mm, somewhat larger than those of the first two temporal techniques. However, this method is superior in efficiency, requiring only a single-shot implementation. In comparison with the Fourier transform method, which also requires only a single shot, the proposed technique can achieve a much higher resolution in principle. Although this technique still involves a spatial operation, this operation is confined within a narrow neighborhood of only two pixels, so the object edges are well protected.
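The two-pixel spatial solve referred to above can be sketched numerically. The snippet below (our own illustration; the function and parameter names are ours, not the authors') recovers the wrapped phase from two phase-shifted patterns by solving, per pixel, a 4×3 linear system built from each pixel and its right neighbor, assuming a locally constant fringe carrier frequency.

```python
import numpy as np

def spatial_two_step_phase(I1, I2, delta1, delta2, omega):
    """Wrapped-phase retrieval from two fringe patterns with known phase
    shifts delta1, delta2, using each pixel together with its right
    neighbor. Per pixel it solves the 4x3 system
        I_k(u)   = A + C1*cos(delta_k)       - C2*sin(delta_k)
        I_k(u+1) = A + C1*cos(delta_k+omega) - C2*sin(delta_k+omega)
    with C1 = B*cos(Phi), C2 = B*sin(Phi); omega is the fringe angular
    frequency in radians per pixel (assumed locally constant)."""
    M = np.array([
        [1.0, np.cos(delta1),         -np.sin(delta1)],
        [1.0, np.cos(delta1 + omega), -np.sin(delta1 + omega)],
        [1.0, np.cos(delta2),         -np.sin(delta2)],
        [1.0, np.cos(delta2 + omega), -np.sin(delta2 + omega)],
    ])
    Minv = np.linalg.pinv(M)   # least-squares inverse, shared by all pixels
    # four observations per pixel: I1(u), I1(u+1), I2(u), I2(u+1)
    obs = np.stack([I1[:, :-1], I1[:, 1:], I2[:, :-1], I2[:, 1:]])
    A, C1, C2 = np.tensordot(Minv, obs, axes=1)
    return np.arctan2(C2, C1)  # wrapped phase, shape (H, W-1)
```

Because the only spatial coupling is between a pixel and its immediate neighbor, errors at an edge stay confined to that two-pixel neighborhood, consistent with the behavior reported above.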


Fig. 5. Cross-sections of the depth maps measured by using, from the left to right, the temporal four-step phase-shifting technique, the temporal two-step phase-shifting technique, the Fourier transform method, and the spatial two-step phase-shifting technique newly proposed, respectively.



Table 1. RMS errors (mm) of Depths Measured Using Different Methods

These comparison results can be further explained from another point of view, through the spatial bandwidth of fringe patterns [38]. In the fringe projection technique, a captured fringe pattern is characterized by its spectrum, in which the background corresponds to a low-frequency component near the origin, and on both sides of it the fringes form a pair of conjugate-symmetric lobes. Some factors, such as edges and discontinuities in object shapes, rapidly varying background intensities, and severe curvatures and non-sinusoidal profiles of fringes, increase the widths of these frequency components, leading to a risk of spectral overlapping. In this situation, the temporal multi-step phase-shifting technique remains accurate because it is insensitive to spectral overlapping. The Fourier transform method requires that the fringes and the background have separated spectra, and therefore it has a decreased measurement resolution, especially at object edges. The proposed method, albeit involving a spatial operation, is less sensitive to spectral overlapping and thus achieves a higher resolution than the Fourier transform method.
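To make the bandwidth argument concrete, the following minimal sketch of the Fourier transform method (our own illustration, not the authors' implementation) isolates the +1 fringe lobe in the spectrum of each image row; whenever edges or background variations widen the lobes until they overlap, this filtering step fails, which is the resolution loss discussed above.

```python
import numpy as np

def fourier_transform_phase(I, carrier):
    """Wrapped phase from a single fringe pattern by band-pass filtering
    the +1 spectral lobe around a known carrier frequency (cycles per
    pixel along u), row by row. A minimal sketch: real implementations
    use a smooth window and, often, a full 2D spectrum."""
    F = np.fft.fft(I, axis=1)
    freqs = np.fft.fftfreq(I.shape[1])
    window = np.abs(freqs - carrier) < carrier / 2   # keep only the +1 lobe
    analytic = np.fft.ifft(F * window, axis=1)       # complex fringe signal
    return np.angle(analytic)                        # carrier phase included
```

The returned phase still contains the carrier term, which is removed in practice by subtracting a reference-plane phase.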

Secondly, we measured a part having a more complex shape than the block measured in Figs. 3 and 4. The object, shown in Fig. 6(a), is made of cast aluminum and is composed of more types of geometrical shapes having various edges and discontinuities. The measurement procedure is the same as that just described. Figure 6 shows the captured fringe patterns having different phase shifts, the fringe modulations, and the wrapped and unwrapped phase maps. From the phase map, we reconstruct the depth map and the 3D point cloud as shown in Fig. 7, demonstrating that the proposed method is valid in measuring complex shapes.


Fig. 6. Experimental results of measuring a complex part. (a) is the photograph of the measured object; (b) and (c) show the captured fringe patterns having phase shifts 0 and $\pi /2$ radians, respectively; (d) shows the fringe modulations; (e) and (f) show the wrapped and the unwrapped phase maps in radians, respectively.



Fig. 7. Measurement results of the complex part: (a) the depth map; (b) the point cloud.


With this newly proposed technique, there are issues worth discussing. From Figs. 6(b) and 6(c), we observe that the fringes have dramatically non-uniform brightness and contrast over the fringe patterns, depending on the local slopes of the surfaces. This phenomenon is also illustrated by the fringe modulations in Fig. 6(d). The nonuniformity of fringe modulations is induced by the non-Lambertian reflection properties of metal surfaces. Many methods have been developed to deal with this “high dynamic range” issue by using, for example, multiple views [39,40], multiple exposures [41], polarization filters [42], or light field imaging [43,44]. Most of these methods cannot achieve a high efficiency, because they require capturing a number of fringe patterns in order to select from them unsaturated patterns, regions, or pixels having sufficiently high modulations. A different solution is to generate fringe patterns whose gray levels are adjusted locally to adapt to the measured surfaces [45]. This method can be used in our proposed technique to enhance fringe contrast and avoid local saturation simultaneously, but it has the limitation of reduced flexibility.

Another important issue regards phase unwrapping. After implementing the proposed polarization spatial phase-shifting technique, we have to unwrap the acquired fringe phases in order to obtain an absolute phase map. In fact, the polarization spatial phase-shifting technique is independent of the phase-unwrapping method used; in other words, it can be combined with different phase-unwrapping techniques to achieve the same purpose. In these experiments, we unwrapped the phase maps simply by comparing phase values between consecutive pixels along a path. Although we optimized the path by using fringe modulation as a measure of pixel reliability, this path-dependent algorithm is sensitive to noise and discontinuities in the wrapped phase map. In particular, a difficulty occurs when the surface has large discontinuities inducing a phase step greater than $\pi $ radians. In these experiments, we coped with this situation for the moment by using some prior knowledge about the shapes of the objects. Temporal phase-unwrapping techniques enable us to overcome this difficulty by employing multi-frequency patterns [22] or Gray-coded patterns [46]. These principles are also used to deal with the same issue in high-speed profilometry, at the expense of measurement efficiency [47]. Different from them, the spatial methods of adding speckles [48] or specially designed markers [16] to the fringe patterns do not require capturing extra patterns, thus enabling us to overcome this drawback of temporal methods in our future works.
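The path-following unwrapping used here can be illustrated by a generic 1D sketch (our own, not the authors' exact code): each pixel visited along the path is shifted by the multiple of $2\pi$ that brings it within $\pi$ of its predecessor. This is exactly the step that fails when a true surface discontinuity produces a phase jump greater than $\pi$.

```python
import numpy as np

def unwrap_path(wrapped):
    """Unwrap a 1D sequence of wrapped phases (radians) visited along a
    path: add to each value the integer multiple of 2*pi that makes it
    differ from the previously unwrapped value by less than pi."""
    out = np.asarray(wrapped, dtype=float).copy()
    for i in range(1, out.size):
        k = np.round((out[i - 1] - out[i]) / (2 * np.pi))
        out[i] += 2 * np.pi * k
    return out
```

In 2D, the same update is applied along a path ordered by a reliability measure such as fringe modulation, which is why noise and discontinuities anywhere on the path can propagate errors downstream.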

5. Conclusion

In this paper, we have presented a polarization spatial phase-shifting profilometry for measuring the 3D shapes of metal objects. Its principle is based on the property of metal surfaces of maintaining the polarization state of incident light. In implementation, this technique uses a pair of projectors with orthogonal polarization filters to simultaneously project two sinusoidal fringe patterns having different phase shifts onto the measured metal surfaces, and uses two polarization cameras to capture the deformed fringe patterns. To overcome the difficulty of aligning the two projectors, we suggested a strategy that generates fringes along the epipolar lines of the two-projector system, so that the two projectors can work like a single device even though their optical axes are not coincident. Experimental results demonstrated this proposed method to be effective in measuring metal objects. It involves a single-shot implementation, thus being more efficient than temporal phase-shifting techniques. In comparison with Fourier transform profilometry, this method achieves relatively higher accuracy and precision. Because of these advantages, this technique is suitable for measuring metal parts in batches and has potential for measuring dynamic metal objects.

Funding

National Natural Science Foundation of China (51975345).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. T. E. Ollison, J. M. Ulmer, and R. McElroy, “Coordinate measurement technology: A comparison of scanning versus touch trigger probe data capture,” Measurement 156(1), 107604 (2020). [CrossRef]  

2. D. Malacara, Optical Shop Testing (John Wiley & Sons, 1998).

3. P. Murphy and J. Fleig, “Subaperture stitching interferometry for testing mild aspheres,” Proc. SPIE 6293, 62930J (2006). [CrossRef]  

4. S. Chen, S. Li, Y. Dai, and Z. Zheng, “Iterative algorithm for subaperture stitching test with spherical interferometers,” J. Opt. Soc. Am. A 23(5), 1219–1226 (2006). [CrossRef]  

5. J. Peng, Y. Yu, D. Chen, H. Guo, J. Zhong, and M. Chen, “Stitching interferometry of full cylinder by use of the first-order approximation of cylindrical coordinate transformation,” Opt. Express 25(4), 3092–3103 (2017). [CrossRef]  

6. M. C. Knauer, J. Kaminski, and G. Hausler, “Phase measuring Deflectometry: a new approach to measuring specular free-form surfaces,” Proc. SPIE 5457, 366–376 (2004). [CrossRef]  

7. L. Huang, M. Idir, C. Zuo, and A. Asundi, “Review of phase measuring deflectometry,” Opt. Lasers Eng. 107, 247–257 (2018). [CrossRef]  

8. H. Guo, P. Feng, and T. Tao, “Specular surface measurement by using least squares light tracking technique,” Opt. Lasers Eng. 48(2), 166–171 (2010). [CrossRef]  

9. X. Liu, Z. Zhang, N. Gao, and Z. Meng, “3D shape measurement of diffused/specular surface by combining fringe projection and direct phase measuring deflectometry,” Opt. Express 28(19), 27561–27574 (2020). [CrossRef]  

10. J. Veitch-Michaelis, Y. Tao, D. Walton, J. P. Muller, B. Crutchley, J. Storey, C. Paterson, and A. Chown, “Crack Detection in “As-Cast” Steel Using Laser Triangulation and Machine Learning,” in Proceedings of IEEE Conference on Computer and Robot Vision (IEEE, 2016), pp. 342–349.

11. T. Shinohara, N. Mascko, and S. Tsujikawa, “Moiré method to measure penetration depth profiles on unevenly corroded metal surfaces,” Corros. Sci. 35(1-4), 785–789 (1993). [CrossRef]  

12. E. J. Sieczka, “Feasibility of moiré contouring for flatness checking of steel plates,” Proc. SPIE 1821, 428–438 (1993). [CrossRef]  

13. M. Takeda and K. Mutoh, “Fourier transform profilometry for the automatic measurement of 3-D object shapes,” Appl. Opt. 22(24), 3977–3982 (1983). [CrossRef]  

14. V. Srinivasan, H. C. Liu, and M. Halioua, “Automated phase-measuring profilometry of 3-D diffuse objects,” Appl. Opt. 23(18), 3105–3108 (1984). [CrossRef]  

15. S. S. Gorthi and P. Rastogi, “Fringe projection techniques: Whither we are?” Opt. Lasers Eng. 48(2), 133–140 (2010). [CrossRef]  

16. S. Zhang, “Recent progresses on real-time 3D shape measurement using digital fringe projection techniques,” Opt. Lasers Eng. 48(2), 149–158 (2010). [CrossRef]  

17. X. Su and W. Chen, “Fourier transform profilometry: A review,” Opt. Lasers Eng. 35(5), 263–284 (2001). [CrossRef]  

18. M. Kujawinska and J. Wojciak, “Spatial-carrier phase-shifting technique of fringe pattern analysis,” Proc. SPIE 1508, 61–67 (1991). [CrossRef]  

19. S. Tang and Y. Y. Hung, “Fast profilometer for the automatic measurement of 3-D object shapes,” Appl. Opt. 29(20), 3012–3018 (1990). [CrossRef]  

20. R. Zhang and H. Guo, “Phase gradients from intensity gradients: a method of spatial carrier fringe pattern analysis,” Opt. Express 22(19), 22432–22445 (2014). [CrossRef]  

21. H. Guo, Q. Yang, and M. Chen, “Local frequency estimation for the fringe pattern with a spatial carrier: principle and applications,” Appl. Opt. 46(7), 1057–1065 (2007). [CrossRef]  

22. C. Zuo, L. Huang, M. Zhang, Q. Chen, and A. Asundi, “Temporal phase unwrapping algorithms for fringe projection profilometry: A comparative review,” Opt. Lasers Eng. 85, 84–103 (2016). [CrossRef]  

23. R. Smythe and R. Moore, “Instantaneous Phase Measuring Interferometry,” Opt. Eng. 23(4), 23436 (1984). [CrossRef]  

24. O. Y. Kwon, “Advanced Wavefront Sensing At Lockheed,” Proc. SPIE 0816, 196–211 (1987). [CrossRef]  

25. K. G. Harding, M. P. Coletta, and C. H. Vandommelen, “Color Encoded Moiré Contouring,” Proc. SPIE 1005, 169–178 (1989). [CrossRef]  

26. Z. Zhang, C. E. Towers, and D. P. Towers, “Time efficient color fringe projection system for 3D shape and color using optimum 3-frequency Selection,” Opt. Express 14(14), 6444–6455 (2006). [CrossRef]  

27. P. S. Huang, C. Zhang, and F. P. Chiang, “High-speed 3-D shape measurement based on digital fringe projection,” Opt. Eng. 42(1), 163–168 (2003). [CrossRef]  

28. M. Born and E. Wolf, Principles of Optics (Cambridge University, 1999), Chap. 14.

29. E. R. Davies, Computer and Machine Vision: Theory, Algorithms, Practicalities (Academic, 2012), Chap. 18.

30. R. Zhang, H. Guo, and A. K. Asundi, “Geometric analysis of influence of fringe directions on phase sensitivities in fringe projection profilometry,” Appl. Opt. 55(27), 7675–7687 (2016). [CrossRef]  

31. Y. Lu, R. Zhang, and H. Guo, “Correction of illumination fluctuations in phase-shifting technique by use of fringe histograms,” Appl. Opt. 55(1), 184–197 (2016). [CrossRef]  

32. S. Xing and H. Guo, “Iterative calibration method for measurement system having lens distortions in fringe projection profilometry,” Opt. Express 28(2), 1177–1196 (2020). [CrossRef]  

33. S. Zhang and S. Yau, “Three-dimensional shape measurement using a structured light system with dual cameras,” Opt. Eng. 47(1), 013604 (2008). [CrossRef]  

34. S. Feng, C. Zuo, L. Zhang, T. Tao, Y. Hu, W. Yin, J. Qian, and Q. Chen, “Calibration of fringe projection profilometry: A comparative review,” Opt. Lasers Eng. 143, 106622 (2021). [CrossRef]  

35. H. Zhu, S. Xing, and H. Guo, “Efficient depth recovering method free from projector errors by use of pixel cross-ratio invariance in fringe projection profilometry,” Appl. Opt. 59(4), 1145–1155 (2020). [CrossRef]  

36. H. Guo, H. He, Y. Yu, and M. Chen, “Least-squares calibration method for fringe projection profilometry,” Opt. Eng. 44(3), 033603 (2005). [CrossRef]  

37. V. H. Flores, A. Reyes-Figueroa, C. Carrillo-Delgado, and M. Rivera, “Two-step phase shifting algorithms: Where are we?” Opt. Laser Tech. 126, 106105 (2020). [CrossRef]  

38. N. T. Shaked, Y. Zhu, M. T. Rinehart, and A. Wax, “Two-step-only phase-shifting interferometry with optimized detector bandwidth for microscopy of live cells,” Opt. Express 17(18), 15585–15591 (2009). [CrossRef]  

39. C. Yu, F. Ji, J. Xue, and Y. Wang, “Adaptive binocular fringe dynamic projection method for high dynamic range measurement,” Sensors 19(18), 4023 (2019). [CrossRef]  

40. S. Feng, Q. Chen, C. Zuo, and A. Asundi, “Fast three-dimensional measurements for dynamic scenes with shiny surfaces,” Opt. Commun. 382, 18–27 (2017). [CrossRef]  

41. S. Zhang and S. T. Yau, “High dynamic range scanning technique,” Opt. Eng. 48(3), 033604 (2009). [CrossRef]  

42. B. Salahieh, Z. Chen, J. J. Rodriguez, and R. Liang, “Multi-polarization fringe projection imaging for high dynamic range objects,” Opt. Express 22(8), 10064–10071 (2014). [CrossRef]  

43. Z. Cai, X. Liu, X. Peng, Y. Yin, A. Li, J. Wu, and B. Z. Gao, “Structured light field 3D imaging,” Opt. Express 24(18), 20324–20334 (2016). [CrossRef]  

44. Z. Cai, X. Liu, X. Peng, and B. Z. Gao, “Universal phase-depth mapping in a structured light field,” Appl. Opt. 57(1), A26–A32 (2018). [CrossRef]  

45. D. Li and J. Kofman, “Adaptive fringe-pattern projection for image saturation avoidance in 3D surface-shape measurement,” Opt. Express 22(8), 9887–9901 (2014). [CrossRef]  

46. G. Sansoni, S. Corini, S. Lazzari, R. Rodella, and F. Docchio, “Three-dimensional imaging based on Gray-code light projection: characterization of the measuring algorithm and development of a measuring system for industrial applications,” Appl. Opt. 36(19), 4463–4472 (1997). [CrossRef]  

47. X. He and K. Qian, “A comparative study on temporal phase unwrapping methods in high-speed fringe projection profilometry,” Opt. Lasers Eng. 142, 106613 (2021).

48. W. Yin, J. Zhong, S. Feng, T. Tao, J. Han, L. Huang, Q. Chen, and C. Zuo, “Composite deep learning framework for absolute 3D shape measurement based on single fringe phase retrieval and speckle correlation,” J. Phys. Photonics 2, 045009 (2020). [CrossRef]  




Equations (12)

$$g_k(m,n) = a + b\cos(2\pi m/p + \delta_k), \quad k = 1, 2$$

$$R_k(x,y,z) = r_{\alpha k}\, g_{\alpha k}(x,y,z) + r_{\beta k}\, g_{\beta k}(x,y,z), \quad k = 1, 2$$

$$\begin{cases} I_1(u,v) = r_{\alpha 1} g_{\alpha 1}(x,y,z)\cos^2\theta_1 + r_{\alpha 2} g_{\alpha 2}(x,y,z)\sin^2\theta_2 \\ I_2(u,v) = r_{\alpha 1} g_{\alpha 1}(x,y,z)\sin^2\theta_1 + r_{\alpha 2} g_{\alpha 2}(x,y,z)\cos^2\theta_2 \end{cases}$$

$$\begin{cases} I_1(u,v) = r_{\alpha 1} g_{\alpha 1}(x,y,z) = A_1(u,v) + B_1(u,v)\cos[\Phi(u,v) + \delta_1] \\ I_2(u,v) = r_{\alpha 2} g_{\alpha 2}(x,y,z) = A_2(u,v) + B_2(u,v)\cos[\Phi(u,v) + \delta_2] \end{cases}$$

$$\begin{bmatrix} m_1 & n_1 & 1 \end{bmatrix} \begin{bmatrix} f_{11} & f_{12} & f_{13} \\ f_{21} & f_{22} & f_{23} \\ f_{31} & f_{32} & f_{33} \end{bmatrix} \begin{bmatrix} m_2 \\ n_2 \\ 1 \end{bmatrix} = 0$$

$$\begin{bmatrix} m_{e1} & n_{e1} & 1 \end{bmatrix} \begin{bmatrix} f_{11} & f_{12} & f_{13} \\ f_{21} & f_{22} & f_{23} \\ f_{31} & f_{32} & f_{33} \end{bmatrix} = \mathbf{0}$$

$$\begin{bmatrix} f_{11} & f_{12} & f_{13} \\ f_{21} & f_{22} & f_{23} \\ f_{31} & f_{32} & f_{33} \end{bmatrix} \begin{bmatrix} m_{e2} \\ n_{e2} \\ 1 \end{bmatrix} = \mathbf{0}$$

$$\begin{cases} I_1(u,v) = A(u,v) + B(u,v)\cos[\Phi(u,v) + \delta_1] \\ I_2(u,v) = A(u,v) + B(u,v)\cos[\Phi(u,v) + \delta_2] \end{cases}$$

$$\begin{bmatrix} 1 & \cos\delta_1 & -\sin\delta_1 \\ 1 & \cos(\delta_1+\omega_u) & -\sin(\delta_1+\omega_u) \\ 1 & \cos\delta_2 & -\sin\delta_2 \\ 1 & \cos(\delta_2+\omega_u) & -\sin(\delta_2+\omega_u) \end{bmatrix} \begin{bmatrix} A(u,v) \\ C_1(u,v) \\ C_2(u,v) \end{bmatrix} = \begin{bmatrix} I_1(u,v) \\ I_1(u+1,v) \\ I_2(u,v) \\ I_2(u+1,v) \end{bmatrix}$$

$$\phi(u,v) = \arctan\frac{C_2(u,v)}{C_1(u,v)}$$

$$h(u,v) = \frac{E(u,v)\,[\Phi(u,v) - \Phi_0(u,v)]}{1 + F(u,v)\,[\Phi(u,v) - \Phi_0(u,v)]}$$

$$\begin{bmatrix} \sum_{k=1}^{K}(\Phi_k-\Phi_0)^2 & -\sum_{k=1}^{K}(\Phi_k-\Phi_0)^2 H_k \\ -\sum_{k=1}^{K}(\Phi_k-\Phi_0)^2 H_k & \sum_{k=1}^{K}(\Phi_k-\Phi_0)^2 H_k^2 \end{bmatrix} \begin{bmatrix} E \\ F \end{bmatrix} = \begin{bmatrix} \sum_{k=1}^{K}(\Phi_k-\Phi_0) H_k \\ -\sum_{k=1}^{K}(\Phi_k-\Phi_0) H_k^2 \end{bmatrix}$$
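The last two relations above, the phase-to-depth mapping and its least-squares calibration from K reference planes, can be sketched per pixel as follows (a minimal sketch with our own function names; the normal-equation form follows from linearizing $E\,\Delta\Phi - F\,\Delta\Phi\, h = h$).

```python
import numpy as np

def calibrate_pixel(dphi, depths):
    """Least-squares fit of the per-pixel coefficients E, F in the model
    h = E*dphi / (1 + F*dphi), from K reference planes with measured
    phase offsets dphi = Phi_k - Phi_0 and known depths H_k.
    Linearized residual: E*dphi - F*dphi*h - h = 0."""
    dphi = np.asarray(dphi, dtype=float)
    depths = np.asarray(depths, dtype=float)
    A = np.column_stack([dphi, -dphi * depths])  # columns multiply E and F
    (E, F), *_ = np.linalg.lstsq(A, depths, rcond=None)
    return E, F

def phase_to_depth(dphi, E, F):
    """Map a measured phase offset to depth with the calibrated model."""
    return E * dphi / (1 + F * dphi)
```

In a full calibration, this fit is repeated independently at every pixel, yielding the coefficient maps $E(u,v)$ and $F(u,v)$.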