Optica Publishing Group

Modulation measuring profilometry with auto-synchronous phase shifting and vertical scanning

Open Access

Abstract

To measure the shape of a complex object in a vertical measurement mode with higher accuracy, a novel modulation measuring profilometry realizing auto-synchronous phase shifting and vertical scanning is proposed. A coaxial optical system for projection and observation, instead of a triangulation system, is adopted to avoid shadow and occlusion. In the projection system, a sinusoidal grating is placed perpendicular to the optical axis. A 1D precision translation platform moves the grating along a direction at a certain angle to the optical axis, achieving both phase shifting and vertical scanning. A series of fringe patterns with different modulation variations is captured by a CCD camera during scanning. The profile of the tested object can then be reconstructed from the relationship between the height values and the modulation distributions. Unlike the previous method based on a Fourier transform of each 2D fringe pattern, the modulation maps are calculated from the intensity curve formed by the points with a definite pixel coordinate in the captured fringe patterns. The paper gives the principle of the proposed method, the setup of the measurement system, and the method for system calibration. Computer simulation and experimental results prove its feasibility.

© 2014 Optical Society of America

1. Introduction

Optical 3-D shape measurement based on structured light illumination, with its advantages of non-contact operation, high speed, and high accuracy, has been widely applied in machine vision, quality control, automated manufacturing, and biomedical engineering. Based on the principle of structured light triangulation, phase-measuring profilometry (PMP) [1,2] and Fourier transform profilometry (FTP) [3], among others, have been developed for 3-D profile measurement. However, these phase-measuring methods suffer from the inherent problems of shadow and occlusion, because a certain angle between the projection optical axis and the camera imaging optical axis is required.

To solve the problems of shadow and occlusion, several vertical measurement techniques have been proposed [4–7], including modulation measurement profilometry using the phase-shifting technique or the Fourier transform technique [4,5], and an absolute three-dimensional shape measurement technique using coaxial and coimage plane optical systems with Fourier fringe analysis [6]. Unlike methods based on the optical triangulation principle, modulation measurement profilometry is a uni-axial measurement: it reconstructs the surface of the measured object from the modulation values, rather than the phases, of the sinusoidal fringes. The modulation measurement method based on the Fourier transform technique needs to scan the tested object M times along the optical axis direction to obtain M frames of fringe patterns [4]; a Fourier transform and a filtering operation are then used to calculate the modulation map of each captured image. The filtering operation may smooth areas exhibiting considerable variation in height, so the error is large when measuring a complex object. In the modulation measurement method based on the phase-shifting technique [5], modulation maps are calculated by a point-by-point algorithm from multiple frames of phase-shifted fringes at each scanning position. This method has higher measurement precision; however, it requires at least three frames of captured fringe patterns at each scanning position, making it a go-stop-phase-shift procedure instead of a continuous measurement, which needs a longer measurement period. Recently, a uni-axial measurement using a liquid crystal grating was proposed [8,9]. Even though this method has high measurement accuracy, the optical setup for the three-dimensional shape measurement is complex.

Another uniaxial 3-D shape measurement method, based on analyzing phase error, has also been introduced [10]. It exploits the relationship between the depth z and the phase error caused by improperly defocused binary structured patterns. The extraction of depth information is not based on the triangulation relationship formed by the optical axes of the projector and the imaging device, so this approach also overcomes the problems of shadow and occlusion. However, it has lower spatial resolution compared with the other uniaxial 3-D shape measurement techniques.

This paper proposes a novel modulation measuring profilometry with auto-synchronous phase shifting and vertical scanning. A 1D precision translation platform driven by a step motor moves the grating at a certain angle to the optical axis, achieving both phase shifting and vertical scanning. During the measurement, a controller continually generates two sets of pulses with different time intervals. One set of pulses drives the step motor system so that the grating image continually scans the object; during the scanning, the grating keeps a fixed phase-shifting interval (2π/N, N ≥ 3). The other set of pulses synchronously triggers the CCD camera to capture fringe patterns with phase differences. The modulation maps are calculated from the intensity curve formed by the points with a definite pixel coordinate in the captured fringe patterns. By applying both the modulation maps and the look-up table of the measurement system, the profile of the tested object can be obtained. Computer simulation and experiment verify its effectiveness and feasibility.

2. Principle

The setup of the proposed 3-D surface measurement is shown in Fig. 1. In this system, the grating is perpendicular to the optical axis. A 1D translation platform driven by a step motor moves the grating at a certain angle α to the optical axis, achieving both phase shifting and vertical scanning. The optical axes of the CCD camera and the projector are coaxial, which avoids the shadow and occlusion problems of triangulation systems. The system can measure an object with considerable variation in height and does not need to calculate wrapped phases. During the measurement, the relative positions of the light source, projection lens, beam splitter, CCD camera, and tested object are kept unchanged, while the grating is continuously moved in the Zα direction shown in Fig. 2.

Fig. 1 The setup of the proposed 3-D surface measurement.

Fig. 2 The decomposition for the movement of grating.

Figure 2 shows positions of the grating during the scanning process. The amount of movement marked by the red arrow along the moving direction can be resolved into two orthogonal components. Between adjacent positions where the CCD camera is triggered to capture fringe patterns, the horizontal component corresponds to a phase-shifting amount of 2π/N (N ≥ 3) when N-step phase shifting is needed, and the vertical component corresponds to the movement of the grating in the vertical direction.

During scanning, the image of the grating is projected onto the surface of the tested object. The fringe pattern observed by the CCD camera on the image plane is sharp, and the modulation of the fringe on this plane is the greatest. As the location of the grating is continuously changed little by little, the observed pattern gradually blurs in front of and behind the image plane, and the modulation becomes smaller because of defocusing.

Figure 3 shows the modulation distribution of eponymous pixels in the captured images during the scanning procedure, which reflects the relationship between the modulation value and the distance (illustrated further in Section 4.1 on system calibration). On the image plane, the modulation reaches its peak. For every set of eponymous pixels of the captured images, a similar modulation-distance curve can be obtained. Because a different distance h corresponds to a different focus position for each pixel (x,y) of the fringes, the profile of the tested object can be reconstructed from the relationship between the modulation values and the distances.
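Locating the peak of such a modulation-distance curve with sub-frame precision can be sketched as follows. This is a minimal illustration, not the authors' exact scheme: the curve shape, frame count, and peak position are hypothetical, and simple three-point parabolic interpolation is assumed for the refinement step.

```python
import numpy as np

def subframe_peak(M):
    """Sub-frame location of the modulation maximum, found by fitting a
    parabola through the peak sample and its two neighbours."""
    i = int(np.argmax(M))
    if i == 0 or i == len(M) - 1:        # peak on the boundary: no refinement
        return float(i)
    y0, y1, y2 = M[i - 1], M[i], M[i + 1]
    return i + 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)

# hypothetical modulation-distance curve: a Gaussian peaking at t = 70.3
t = np.arange(160)
M = np.exp(-((t - 70.3) / 20.0) ** 2)
print(subframe_peak(M))                  # close to 70.3
```

The refined peak position, rather than the integer frame index, is what a calibration look-up table would consume.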

Fig. 3 Modulation distribution of eponymous pixels in the captured images.

As shown in Fig. 1, during the measurement a controller continually sends two sets of pulses with different time intervals. One set of pulses drives the step motor system to keep the grating image continually scanning the object. Whenever the grating has moved by a fixed phase-shifting interval (2π/N, N ≥ 3), the other set of pulses synchronously triggers the CCD camera to capture a fringe pattern, so successive patterns carry phase differences. The variable t marks the serial number of the captured fringes: when the tth camera-triggering pulse is generated, the grating arrives at position t and the accumulated phase shift is t·2π/N. The fringe pattern captured by the CCD camera at that position is marked as the tth frame. The light intensity distribution of the fringe pattern on the image plane can be expressed as

$$I(x,y,t)=\frac{R(x,y)}{A^{2}}\left\{I_{0}(t)+C_{0}(x,y,t)\cos\left[2\pi f_{0}x+\Phi_{0}(x,y)+\frac{2\pi t}{N}\right]\right\},\quad t=0,1,\ldots,T-1 \tag{1}$$
Here I(x,y,t) is the fringe pattern captured by the CCD camera when the grating is at position t; R(x,y) is the reflectivity of the object; A is the magnification of the measurement system; I0(t) and C0(x,y,t) are the background intensity and the projected fringe contrast, respectively; f0 is the spatial frequency of the grating; Φ0(x,y) is the initial phase; and N is the number of phase steps per period in the N-step phase-shifting algorithm (N = 4 in this paper). T is the total number of pulses triggering the CCD camera, i.e., the total number of fringes captured during scanning; for example, t = T−1 means that T frames have been captured. In the direction perpendicular to the optical axis, when the grating is moved T steps, T/N fringe periods are traversed, while in the optical axis direction the grating is synchronously moved T steps to carry out the vertical scanning. The design of this measurement system thus realizes phase shifting and vertical scanning continuously and auto-synchronously.

When the grating arrives at the position marked by serial number t, a clear image I(x,y,t) can be captured on its image plane, while blurred images are captured at a distance δ in front of or behind the image plane. These blurred images can be described as the result of convolving the focused image with a blurring function, that is,

$$I_{d}(x,y,t;\delta)=H(x,y)*I(x,y,t) \tag{2}$$
where
$$H(x,y)=\frac{1}{2\pi\sigma_{H}^{2}}\,e^{-\frac{x^{2}+y^{2}}{2\sigma_{H}^{2}}} \tag{3}$$
Here the Gaussian model of the modulation transfer function (MTF) is used for simplicity. The spread parameter σH is assumed to be proportional to the radius r of the blur circle, σH = Cr, where C depends on the imaging optics and the image sensor; its value is 2 in most instances [11].
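The contrast attenuation implied by this Gaussian blur model can be checked numerically. The sketch below uses the standard Gaussian-MTF relation with f0 expressed in cycles per pixel (so the attenuation is exp(−2π²f0²σ²), the same relation as the exponential factor above up to the frequency convention); the frequency, blur width, and kernel size are hypothetical.

```python
import numpy as np

def gaussian_contrast_attenuation(f0, sigma):
    """Contrast attenuation of a sinusoid of frequency f0 (cycles/pixel)
    blurred by a Gaussian PSF of standard deviation sigma (pixels)."""
    return np.exp(-2 * np.pi**2 * f0**2 * sigma**2)

# numerical check: blur a unit-contrast fringe with a discrete Gaussian kernel
f0, sigma = 1 / 16, 2.0                      # hypothetical values
x = np.arange(512)
fringe = np.cos(2 * np.pi * f0 * x)
k = np.arange(-25, 26)
kernel = np.exp(-k**2 / (2 * sigma**2))
kernel /= kernel.sum()                       # normalize to unit gain at DC
blurred = np.convolve(fringe, kernel, mode="same")
measured = np.ptp(blurred[100:400]) / 2      # half the peak-to-valley swing
```

The measured contrast of the blurred fringe agrees with the analytic attenuation factor, which is what lets defocus-induced modulation loss encode distance from the image plane.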

According to Eqs. (2) and (3), the light energy distribution captured by the CCD camera at a distance δ in front of or behind the image plane can be expressed as

$$I_{d}(x,y,t;\delta)=\frac{R(x,y)}{A^{2}}\left\{I_{0}(t)+C_{0}(x,y,t)\,e^{-\frac{1}{2}f_{0}^{2}\sigma_{H}^{2}}\cos\left[2\pi f_{0}x+\Phi_{0}(x,y)+\frac{2\pi t}{N}\right]\right\} \tag{4}$$
The fringe modulation can then be written as
$$M(x,y,t;\delta)=R(x,y)\,M_{0}(x,y,t)\,e^{-\frac{f_{0}^{2}}{2}\left(\frac{C\delta R_{L}}{l}\right)^{2}} \tag{5}$$
where RL is the radius of the lens, l is the image distance, and δ is the defocus amount. M0(x,y,t) is the maximum modulation value, which occurs on the image plane of the grating, and M(x,y,t;δ) is the fringe modulation of the observed fringe pattern.

In this paper, modulation maps are calculated from the curve I(t)(x,y) produced by a definite point (x,y) across the captured fringe patterns. Based on Eq. (1), this curve can be simply expressed as

$$I(t)=\frac{R}{A^{2}}\left\{I_{0}(t)+C_{0}(t)\,e^{-\frac{1}{2}f_{0}^{2}\sigma_{H}^{2}}\cos\left[\Phi+\frac{2\pi t}{N}\right]\right\},\quad t=0,1,\ldots,T-1 \tag{6}$$
where Φ = 2πf0x + Φ0. For a definite point (x,y) on these images, Φ is a constant.

Figure 4(a) shows a series of fringe patterns captured by the CCD camera during scanning. The red pixels, with the same (x,y) coordinates in each image, produce the curve I(t)(x,y), whose intensity distribution is shown in Fig. 4(b) (the blue solid curve).

Fig. 4 (a) A series of fringe patterns captured by CCD camera; (b) The intensity distribution of a definite point (x,y) in the captured fringes.

Both Eq. (6) and Fig. 4 show that the modulation values of a definite point (x,y) during scanning can be obtained from the curve I(t)(x,y) formed by that point across all the captured images. The approach is therefore a point-by-point processing method rather than a global processing method based on whole-fringe analysis; that is, each pixel in a fringe is not influenced by its neighboring pixels. Consequently, besides avoiding the problems of shadow and occlusion, this uni-axial measurement method has high accuracy.

The curve I(t)(x,y) obtained by our method is similar to that obtained by a white light interferometer, and it can be treated as a signal with a new scanning wavelength as in Ref. [12], a typical white light interferometry adopted by Bruker-Nano. Even though the principle of the method in [12] differs from ours, the fringe patterns captured by the two methods are similar to each other. Therefore, the information processing methods [13–17] used in white light interferometry, such as the gravity (centroid) method, the Fourier transform method, and the phase-shifting method, can also be introduced into fringe projection profilometry.

This paper applies the Fourier transform method to calculate the modulation value from the curve I(t)(x,y). Three terms of the spectrum can be obtained and expressed as
$$G(\zeta;x,y)=G_{0}(\zeta;x,y)+G_{1}(\zeta;x,y)+G_{-1}(\zeta;x,y) \tag{7}$$
where G0(ζ;x,y) denotes the zero-frequency spectrum of the curve, and G1(ζ;x,y) and G−1(ζ;x,y) denote its fundamental spectra. Applying a proper filter to extract the useful fundamental component and performing an inverse Fourier transform, we get
$$B(t)=\frac{1}{2}C_{1}(t)\,e^{i\Phi} \tag{8}$$
where $C_{1}(t)=e^{-\frac{1}{2}f_{0}^{2}\sigma_{H}^{2}}\,R\,C_{0}(t)/A^{2}$ for the point (x,y). According to Eq. (8), the modulation distribution C1(t), the red dotted curve in Fig. 4(b), can be obtained. The modulation maps for the whole image set are then calculated by repeating this operation for every point in the fringe patterns.
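The Fourier-transform recovery of the modulation envelope described above can be sketched for one pixel as follows. The signal parameters (background, contrast, envelope width, best-focus frame) are hypothetical, and a simple rectangular band-pass around the positive fundamental at f = 1/N stands in for whatever filter the authors used:

```python
import numpy as np

T, N = 160, 4                      # frames captured, phase steps per period
t = np.arange(T)

# synthetic I(t) for one pixel: background plus Gaussian modulation envelope
t_focus = 70.0                     # hypothetical best-focus frame
envelope = 80.0 * np.exp(-((t - t_focus) / 25.0) ** 2)
I = 100.0 + envelope * np.cos(0.7 + 2 * np.pi * t / N)   # Eq. (6)-style signal

# Fourier transform, keep only the positive fundamental near f = 1/N,
# inverse transform: 2|B| recovers the modulation distribution (Eq. (8))
G = np.fft.fft(I)
f = np.fft.fftfreq(T)
band = (f > 0) & (np.abs(f - 1 / N) < 0.1)
B = np.fft.ifft(np.where(band, G, 0))
C1 = 2 * np.abs(B)                 # recovered modulation distribution

t_max = int(np.argmax(C1))
print(t_max)                       # frame of greatest modulation, ~70
```

Keeping only the positive sideband makes the inverse transform an analytic signal whose magnitude is the envelope, which is why the half-amplitude of Eq. (8) doubles back to C1(t).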

3. Simulation

Computer simulation is used to investigate the feasibility of the proposed method for 3-D shape measurement. The tested object, with four discontinuous height steps, is shown in Fig. 5. The heights of the four steps from bottom to top are 10 mm, 30 mm, 50 mm, and 70 mm, respectively. The other parameters of the system are chosen as follows: the frequency of the projected fringe is 1/4 pixel−1 (a period of 4 pixels), the focal length of the lens is 58 mm, the diameter of the lens is 40 mm, T = 160 frames of fringe patterns are captured by the CCD camera, and the size of each captured fringe pattern is 264 × 264 pixels. To match reality more closely, random noise of 2% of the fringe intensity is added to the images. All simulations are performed on the MATLAB platform.

Fig. 5 Simulated object

Figure 6(a) shows the 70th frame of the 160 fringe patterns. To compare the previous method (the global Fourier transform method of [4]) with the proposed method, four non-boundary points on the simulated object are first chosen (marked by stars in Fig. 6(a): blue star 1, red star 2, cyan star 3, and magenta star 4). Figure 6(b) draws the four curves produced by the four sets of eponymous pixels of the captured fringes; the colors and numbers of the curves match those of the points in Fig. 6(a). Figure 6(c) shows the modulation distributions calculated by the proposed method (solid curves) and the previous method (dotted curves). For non-boundary points of the object, the modulation distributions obtained by the two methods are almost the same. However, for points at the edges of the object, the two methods give different modulation distributions. To show this difference clearly, an edge point marked by the green star 5 in Fig. 6(a) is selected. Figure 6(d) shows the modulation distributions calculated by the two methods; the positions of maximum modulation are the point where the red dotted line crosses the red solid curve and the point where the blue dotted line crosses the blue dash-dot curve, respectively. The frame number with the maximum modulation value obtained by the proposed method differs from that of the previous method, because the filtering operation used in the previous method removes some useful information in the edge zone. The previous method therefore degrades the measurement accuracy when the look-up table is used to find the height distribution of the tested object.

Fig. 6 (a) The 70th frame of fringe patterns; (b) Four curves respectively produced by the definite point of the captured fringe patterns; (c) Modulation distributions of the four curves respectively by the two methods; (d) Comparison of the modulation distribution by the two methods.

The reconstruction and error distributions obtained by the proposed method are shown in Figs. 7(a) and 7(b), respectively; those obtained by the previous method, based on Fourier analysis of each fringe, are shown in Figs. 7(c) and 7(d). For clarity, Fig. 8(a) shows the 132nd row of the tested object together with the reconstructed results of the proposed method and the previous method, and Fig. 8(b) shows part of Fig. 8(a) (from the 175th column to the 205th column of the 132nd row). The Fourier transform in the previous method is a global transform: when it is used to process each frame of fringes, the filter that extracts the fundamental frequency is likely to remove the higher-frequency components that carry the information of areas exhibiting considerable variation in height, such as the steps. Therefore, the noise around the edge of the first inner circle is very sharp, while the result of the proposed method is much better because the modulation is calculated by a point-by-point algorithm that eliminates the influence of neighboring pixels. The root mean square errors of the two methods are 0.214 mm and 3.177 mm, respectively.

Fig. 7 (a) Reconstruction by the proposed method; (b) Error distribution by the proposed method; (c) Reconstruction by the previous method; (d) Error distribution by the previous method.

Fig. 8 (a) The 132nd row of the tested object, the reconstructed result of the same row by proposed method and that by the previous method; (b) The 175th column to the 205th column in the 132nd rows of the tested object, the reconstructed result of the same part of the object by proposed method and that by the previous method.

4. Experiment

4.1 System calibration

To further verify the proposed method, an experiment was carried out. Before the measurement, the system calibration shown in Fig. 9 is performed. In this process, N calibration planes are used (N = 10 in our calibration; this N is distinct from the number of phase steps). The plane farthest from the beam splitter is regarded as the reference plane, whose height is D(N−1) = 0 (the relative distance mentioned in Section 2 is the distance from the measured point to this reference plane). The distance from the first plane D(0) to the last one D(N−1) is 72 mm, and the interval between adjacent planes is 8 mm. For each calibration plane, 298 frames of fringe patterns are captured to calculate the modulation maps of the plane. For example, when the calibration plane is at position D(0), the grating image continuously scans the plane and 298 frames of phase-shifted fringe patterns are synchronously captured by the CCD camera. The calibration plane is then moved to position D(1), the grating returns to its initial position (serial number t = 0), and the scanning and capturing are repeated. The system calibration is complete only when all calibration planes have been processed. For any calibration plane n, the images captured during scanning are marked by the serial number t corresponding to each position of the grating. Any pixel on calibration plane n has a greatest modulation value corresponding to its image plane, marked by tmax(n); thus a relationship can be established between the height and the serial number tmax(n) of the captured image with the greatest modulation value for each point, and a look-up table can be built by quadratic curve fitting. For a point (x,y), this relationship can be expressed as

$$D(n)=a(x,y)+b(x,y)\,t_{\max}(n)+c(x,y)\,t_{\max}^{2}(n),\quad n=0,1,\ldots,N-1 \tag{9}$$
where D(n) is the height value for the point (x,y) on calibration plane n, and a(x,y), b(x,y), and c(x,y) are the coefficients of the quadratic curve fit. tmax(n) represents the serial number of the captured image at which the modulation value is greatest for that calibration plane; in practice, tmax(n) can take non-integer values because interpolation is used. Each pixel in the images has its own formula with different values of a(x,y), b(x,y), and c(x,y); once all pixels have found their corresponding parameters, the look-up table is complete.
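The per-pixel fit of Eq. (9) can be sketched with a standard least-squares polynomial fit. The calibration data below are hypothetical (ten planes spaced 8 mm apart, with linearly varying peak frame indices), chosen only to show the fitting and look-up steps for a single pixel:

```python
import numpy as np

# hypothetical calibration data for one pixel: plane heights D(n) in mm and
# the (interpolated) frame index t_max(n) at which its modulation peaked
D = np.array([0.0, 8.0, 16.0, 24.0, 32.0, 40.0, 48.0, 56.0, 64.0, 72.0])
t_max = np.array([281.0, 250.0, 219.0, 188.0, 157.0, 126.0, 95.0, 64.0, 33.0, 2.0])

# Eq. (9): D(n) = a + b*t_max(n) + c*t_max(n)^2, fitted per pixel
c, b, a = np.polyfit(t_max, D, deg=2)    # polyfit returns highest power first

def height_from_tmax(tm):
    """Look-up-table entry for this pixel: height (mm) from peak frame index."""
    return a + b * tm + c * tm**2
```

In a full calibration, this fit is repeated for every pixel, and the coefficient maps a(x,y), b(x,y), c(x,y) constitute the look-up table.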

Fig. 9 Diagram of calibration.

Figure 10 shows an example of the relationship between the height values D(n) and the serial numbers tmax(n) for the pixel (247,227) of the captured images. The blue stars are the values directly obtained from the ten calibration planes for this point, and the red line is the fitted curve for these data. Each point in the images has its own curve similar to that in Fig. 10.

Fig. 10 Relationship between the position of the grating and the height values.

To estimate the precision of our method, we measure two flat planes with height values of 44 mm and 36 mm after calibration, finding the height of each plane by searching the look-up table stored in the computer. The reconstructions of the two planes are shown in Figs. 11(a) and 11(b). The error distributions for the 300th row of the two reconstructed planes are shown in Figs. 11(c) and 11(d), respectively. The mean heights of the planes are 44.22 mm and 35.69 mm, and the root mean square errors for the two planes are 0.24089 mm and 0.26186 mm, respectively.

Fig. 11 (a) Reconstruction of the plane with height 44mm; (b) Reconstruction of the plane with height 36mm; (c) The error distribution for the 132nd row of the reconstructed plane with height 44mm; (d) The error distribution for the 132nd row of the reconstructed plane with height 36mm.

4.2 Experiment and result discussion

The proposed method is also verified by measuring an object with a hole in the center and two steps on the surface, shown in Fig. 12. The height of the higher step is 57.75 mm, the height of the lower step is 25.25 mm, and the height difference between the two steps is 32.5 mm; both heights were measured with a vernier caliper. Fringe patterns are captured by a CCD camera (Baumer SXC10) with 1024 × 1024 resolution through a beam splitter. The captured images are cropped to 590 × 590 pixels to decrease the calculation time, and the grating is 2 lines/mm. Taking into account the noise and the nonlinear effects of the CCD camera, the proposed method and the previous method are again compared.

Fig. 12 The measured object.

Similarly, 298 frames of the captured fringe patterns are used to calculate the modulation maps of the measured object. The 100th frame is shown in Fig. 13(a). Three non-boundary points on the object (marked by stars in Fig. 13(a): red star 1, magenta star 2, and blue star 3) are first selected to compare the proposed method with the previous method. Figure 13(b) shows the three curves produced by the three sets of eponymous pixels of the captured fringes; the colors and numbers of the curves match those of the points in Fig. 13(a). The corresponding modulation distributions obtained by the proposed method (solid curves) and the previous method (dotted curves) are shown in Fig. 13(c). Figure 13(d) compares the modulation distributions obtained by the two methods for a set of eponymous pixels at an edge of the object (the cyan star 4 in Fig. 13(a)). The maximum modulation values are the point where the red dotted line crosses the red solid curve and the point where the blue dotted line crosses the blue dash-dot curve, respectively. The frame number with the maximum modulation value obtained by the proposed method is not the same as that of the previous method, so the height distributions of the tested object obtained from the look-up table by the two methods are different.

Fig. 13 (a) The 100th frame of the images; (b) Three curves produced by three points (marked in (a)) of the captured fringes; (c) The modulation distributions obtained by the two methods; (d) Comparison of the modulation distribution by the two methods.

Figures 14(a) and 14(b) are the results reconstructed by the two methods, respectively. For clarity, Fig. 14(c) shows the 300th row of the two reconstructions, and Fig. 14(d) shows the enlarged part from the 120th column to 180th column in 300th row of the two reconstructions. The height differences of the two steps by the proposed method and the previous method are 32.809mm and 33.663 mm respectively, and the errors for the two methods are respectively 0.309mm and 1.163mm. It shows that the proposed method has higher measurement accuracy.

Fig. 14 (a) Reconstruction by the proposed method; (b) Reconstruction by the previous method; (c) The 300th rows of the reconstructions by the two methods; (d) the 120th column to the 180th column in the 300th rows of the reconstructions by the two methods.

This uniaxial 3D shape measurement technique employing our designed system has the following merits and possible improvements:

  • (1) Based on the auto-synchronous phase shifting and vertical scanning technique, modulation values are calculated from the curve produced by a definite point of the captured fringes along the scanning direction, instead of by global processing of each fringe with a Fourier transform and filtering operation, so the details of the tested object, especially one with considerable variation in height, are not lost.
  • (2) In our scheme, a small aperture is used for the CCD camera, so the defocus amount for incoherent imaging varies slowly. Moreover, system calibration is applied, so the defocus of the imaging lens can be eliminated even though the MTFs of the projection lens and the imaging lens are multiplied. This differs from schemes such as that of Ref. [6], in which the fringe contrast decreases twice as quickly by virtue of an effect similar to confocal microscopy, and the defocus of the imaging lens must be taken into account.
  • (3) Besides the approach introduced in this paper, the phase-shifting technique can also be applied to calculate the modulation maps from the fringes captured by the new measurement system. For modulation profilometry based on a phase-shift algorithm, the modulation value of a fringe pattern can be calculated using the images immediately before and after it; for example, the modulation value of the tth frame can be calculated from the (t−1)th, tth, (t+1)th, and (t+2)th frames by the four-step phase-shifting algorithm.
  • (4) The curve produced by the same point of the captured fringes can be regarded as having a new scanning wavelength, so the measurement accuracy can be improved by using a shorter step length for the grating movement, which reduces the scanning wavelength; however, the increased measurement time should be taken into account.
  • (5) In our system, the movement of the grating is realized by an open-loop step-motor control system, so missed steps and mechanical errors of the step motor directly affect the precision of the measurement system. If a closed-loop step-motor control system is applied, the measurement accuracy can be improved.
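The four-frame modulation calculation mentioned in item (3) can be sketched as follows. This uses the standard four-step phase-shifting modulation formula, applied to hypothetical per-pixel intensity values rather than to the authors' captured frames:

```python
import numpy as np

def four_step_modulation(I0, I1, I2, I3):
    """Fringe modulation c from four frames with pi/2 phase steps,
    I_k = a + c*cos(phi + k*pi/2): c = 0.5*sqrt((I0-I2)^2 + (I3-I1)^2)."""
    return 0.5 * np.hypot(I0 - I2, I3 - I1)

# check on a synthetic pixel with hypothetical background, contrast, and phase
a, c, phi = 120.0, 35.0, 1.1
frames = [a + c * np.cos(phi + k * np.pi / 2) for k in range(4)]
print(four_step_modulation(*frames))     # recovers c = 35.0
```

Because I0 − I2 = 2c·cos(φ) and I3 − I1 = 2c·sin(φ), the formula recovers the contrast c independently of the background a and the phase φ.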

5. Conclusion

In this paper, a new modulation measuring profilometry with auto-synchronous phase shifting and vertical scanning is proposed. In our system, a 1D precision translation platform drives the grating at a certain angle to the optical axis, achieving both phase shifting and vertical scanning. Unlike previous modulation measurement profilometry, which uses global Fourier-transform-based processing to analyze each captured fringe pattern, the proposed method is a point-by-point processing method that calculates modulation values from the curve produced by a definite point (x,y) across the captured images. It not only improves the measurement accuracy but also saves measurement time. In addition, with our designed measurement system, the phase-shifting technique can also be used to calculate modulation values from the captured phase-shifted fringe patterns to reconstruct the tested object.

Acknowledgments

The authors acknowledge the support of the National Key Scientific Apparatus Development Project (2013YQ49087901) and the National Natural Science Foundation of China (NSFC) (61177010).

References and links

1. V. Srinivasan, H. C. Liu, and M. Halioua, “Automated phase-measuring profilometry of 3-D diffuse objects,” Appl. Opt. 23(18), 3105–3108 (1984).

2. V. Srinivasan, H. C. Liu, and M. Halioua, “Automated phase-measuring profilometry: a phase mapping approach,” Appl. Opt. 24(2), 185–188 (1985).

3. M. Takeda and K. Mutoh, “Fourier transform profilometry for the automatic measurement of 3-D object shapes,” Appl. Opt. 22(24), 3977–3982 (1983).

4. X. Y. Su, L. K. Su, and W. S. Li, “A new Fourier transform profilometry based on modulation measurement,” Proc. SPIE 3749, 438–439 (1999).

5. L. K. Su, X. Y. Su, W. S. Li, and L. Q. Xiang, “Application of modulation measurement profilometry to objects with surface holes,” Appl. Opt. 38(7), 1153–1158 (1999).

6. M. Takeda, T. Aoki, Y. Miyamoto, H. Tanaka, R. W. Gu, and Z. B. Zhang, “Absolute three-dimensional shape measurements using coaxial and coimage plane optical systems and Fourier fringe analysis for focus detection,” Opt. Eng. 39(1), 61–68 (2000).

7. T. Yoshizawa, T. Shinoda, and Y. Otani, “Uniaxis rangefinder using contrast detection of a projected pattern,” Proc. SPIE 4190, 115–122 (2001).

8. Y. Mizutani, R. Kuwano, Y. Otani, N. Umeda, and T. Yoshizawa, “Three-dimensional shape measurement using focus method by using liquid crystal grating and liquid varifocus lens,” Proc. SPIE 6000, 60000J (2005).

9. Y. Otani, F. Kobayashi, Y. Mizutani, S. Watanabe, M. Harada, and T. Yoshizawa, “Uni-axial measurement of three-dimensional surface profile by liquid crystal digital shifter,” Proc. SPIE 7790, 77900A (2010).

10. Y. Xu, L. Ekstrand, and S. Zhang, “Uniaxial 3-D shape measurement with projector defocusing,” Proc. SPIE 8133, 81330M (2011).

11. M. Subbarao and N. Gurumoorthy, “Depth recovery from blurred edges,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 1988), pp. 498–503.

12. D. Chen, J. Schmit, and M. Novak, “Real-time scanner error correction in white light interferometry,” Proc. SPIE 9276, 92760I (2014).

13. Proc. SPIE 9276, Optical Metrology and Inspection for Industrial Applications III, 92760I (2014).

14. K. Korner, R. Windecker, M. Fleischer, and H. J. Tiziani, “One-grating projection for absolute three-dimensional profiling,” Opt. Eng. 40(8), 1653–1660 (2001).

15. S. Chen, A. W. Palmer, K. T. Grattan, and B. T. Meggitt, “Digital signal-processing techniques for electronically scanned optical-fiber white-light interferometry,” Appl. Opt. 31(28), 6003–6010 (1992).

16. S. S. Chim and G. S. Kino, “Three-dimensional image realization in interference microscopy,” Appl. Opt. 31(14), 2550–2553 (1992).

17. P. Sandoz, R. Devillers, and A. Plata, “Unambiguous profilometry by fringe-order identification in white-light phase-shifting interferometry,” J. Mod. Opt. 44(3), 519–534 (1997).



Figures (14)

Fig. 1 The setup of the proposed 3-D surface measurement.
Fig. 2 The decomposition of the movement of the grating.
Fig. 3 Modulation distribution of eponymous pixels in the captured images.
Fig. 4 (a) A series of fringe patterns captured by the CCD camera; (b) the intensity distribution of a definite point (x, y) in the captured fringes.
Fig. 5 Simulated object.
Fig. 6 (a) The 70th frame of the fringe patterns; (b) four curves, each produced by a definite point of the captured fringe patterns; (c) modulation distributions of the four curves by the two methods; (d) comparison of the modulation distributions by the two methods.
Fig. 7 (a) Reconstruction by the proposed method; (b) error distribution of the proposed method; (c) reconstruction by the previous method; (d) error distribution of the previous method.
Fig. 8 (a) The 132nd row of the tested object and the reconstructed results of the same row by the proposed method and the previous method; (b) the 175th to 205th columns of the 132nd row of the tested object and the corresponding reconstructed results by the two methods.
Fig. 9 Diagram of calibration.
Fig. 10 Relationship between the position of the grating and the height values.
Fig. 11 (a) Reconstruction of the plane at height 44 mm; (b) reconstruction of the plane at height 36 mm; (c) error distribution along the 132nd row of the reconstructed plane at height 44 mm; (d) error distribution along the 132nd row of the reconstructed plane at height 36 mm.
Fig. 12 The measured object.
Fig. 13 (a) The 100th frame of the images; (b) three curves produced by the three points marked in (a); (c) modulation distributions obtained by the two methods; (d) comparison of the modulation distributions by the two methods.
Fig. 14 (a) Reconstruction by the proposed method; (b) reconstruction by the previous method; (c) the 300th row of the reconstructions by the two methods; (d) the 120th to 180th columns of the 300th row of the reconstructions by the two methods.

Equations (9)

Equations on this page are rendered with MathJax. Learn more.

$$I(x,y,t) = \frac{R(x,y)A}{2}\left\{ I_0(t) + C_0(x,y,t)\cos\left[ 2\pi f_0 x + \Phi_0(x,y) + 2\pi t/N \right] \right\}, \quad t = 0,1,\ldots,T-1$$

$$I_d(x,y,t;\delta) = H(x,y) * I(x,y,t)$$

$$H(x,y) = \frac{1}{2\pi\sigma_H^2}\, e^{-\frac{x^2+y^2}{2\sigma_H^2}}$$

$$I_d(x,y,t;\delta) = \frac{R(x,y)A}{2}\left\{ I_0(t) + C_0(x,y,t)\, e^{-\frac{1}{2} f_0^2 \sigma_H^2} \cos\left[ 2\pi f_0 x + \Phi_0(x,y) + 2\pi t/N \right] \right\}$$

$$M(x,y,t;\delta) = R(x,y)\, M_0(x,y,t)\, e^{-\frac{f_0^2}{2}\left( \frac{C\delta R}{Ll} \right)^2}$$

$$I(t) = \frac{RA}{2}\left\{ I_0(t) + C_0(t)\, e^{-\frac{1}{2} f_0^2 \sigma_H^2} \cos\left[ \Phi + 2\pi t/N \right] \right\}, \quad t = 0,1,\ldots,T-1$$

$$G^{(\zeta)}(x,y) = G_0^{(\zeta)}(x,y) + G_1^{(\zeta)}(x,y) + G_{-1}^{(\zeta)}(x,y)$$

$$B(t) = \frac{1}{2}\, C_1(t)\, e^{i\Phi}$$

$$D(n) = a(x,y) + b(x,y)\, t_{\max}(n) + c(x,y)\, t_{\max}^2(n), \quad n = 0,1,\ldots,N-1$$
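The last equation, D(n) = a + b·t + c·t², is a quadratic fit used to refine the position of the modulation maximum beyond the frame spacing of the scan. A minimal sketch of such a three-point parabolic peak interpolation (a hypothetical helper, not the authors' code) is:

```python
import numpy as np

def subframe_peak(M):
    """Sub-frame location of the maximum of a modulation curve M.

    Fits D(t) = a + b*t + c*t**2 through the discrete maximum and its
    two neighbours; the vertex -b/(2c) refines the scan position that
    is mapped to surface height by the calibration.
    """
    n = int(np.argmax(M))
    n = min(max(n, 1), len(M) - 2)   # keep the 3-point stencil in range
    y0, y1, y2 = M[n - 1], M[n], M[n + 1]
    c = 0.5 * (y0 - 2 * y1 + y2)     # curvature of the fitted parabola
    b = 0.5 * (y2 - y0)              # local slope at the discrete peak
    if c == 0:                       # flat top: no refinement possible
        return float(n)
    return n - b / (2 * c)           # vertex of the parabola
```

Because the fit is exact for a true parabola, the refinement is limited only by noise and by how well the modulation peak is locally quadratic.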