High-speed and high-accuracy 3D surface measurement using a mechanical projector


Abstract

This paper presents a method to achieve high-speed and high-accuracy 3D surface measurement using a custom-designed mechanical projector and two high-speed cameras. We developed a computational framework that can achieve absolute shape measurement with sub-pixel accuracy through: 1) capturing precisely phase-shifted fringe patterns by synchronizing the cameras with the projector; 2) generating a rough disparity map between the two cameras by employing a standard stereo-vision method on texture images with encoded statistical patterns; and 3) utilizing the wrapped phase as a constraint to refine the disparity map. The projector can project binary patterns at speeds of up to 10,000 Hz, and the cameras can capture the required number of phase-shifted fringe patterns within 1/10,000 of a second; thus 3D shape measurement can be realized at rates as high as 10,000 Hz regardless of the number of phase-shifted fringe patterns required for one 3D reconstruction. Experimental results demonstrate the success of our proposed method.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Due to their high resolution, high speed, and flexibility of implementation, 3D shape measurement techniques using digital fringe projection (DFP) and phase-shifting algorithms have been extensively used in science, engineering, and industrial applications [1].

Phase-shifting algorithms are widely used for 3D reconstruction through fringe analysis because of their accuracy, speed, and robustness to noise. However, a typical phase-shifting algorithm can only provide phase values ranging from −π to π with 2π discontinuities, and such phase is often referred to as the wrapped phase. A phase unwrapping algorithm has to be employed to remove those 2π discontinuities and create a smooth phase before 3D reconstruction. Numerous phase unwrapping algorithms have been developed over the years, but they can generally be classified into two categories: spatial and temporal unwrapping algorithms. A spatial unwrapping algorithm analyzes the wrapped phase map itself and determines the number of 2π to be added to a point, assuming the object surface is smooth along at least one path [2, 3]. Since spatial phase unwrapping does not require acquiring any additional information, such a method does not affect 3D data acquisition speeds. However, regardless of its robustness, a spatial phase unwrapping algorithm is fundamentally limited to measuring “smooth” objects (e.g., no abrupt geometry changes or isolated patches). Temporal phase unwrapping algorithms, in contrast, solve the discontinuity problem by acquiring additional information temporally. Numerous temporal unwrapping algorithms exist, including two-frequency [4] and multi-frequency [5] phase-shifting, and gray-coding plus phase-shifting [6]. Since temporal phase unwrapping algorithms do not require the surface to be smooth, they can be used to measure arbitrary objects. However, the measurement speed is slowed down by the requirement of acquiring additional information at different times.

To address the speed limitations of conventional temporal phase unwrapping methods, researchers have attempted to simultaneously capture images from a different perspective and utilize the secondary camera images to provide cues for phase unwrapping [7–13]. Such methods use geometric constraints, along with other knowledge of the system or the object, to determine the number of 2π to be added for each point. Though successful, the phase unwrapping process is typically slow in nature due to backward-and-forward checking [14].

On the other hand, texture-based standard stereo-vision techniques have been well developed, and numerous global or semi-global stereo-matching algorithms [15–20] have been developed to find corresponding points. For example, the cost-based matching approach calculates a cost from the texture difference between a small region near a source point on one image and a small region near a target point on the other image [21], and the corresponding point is determined by minimizing or maximizing the cost function. A stereo-matching algorithm typically generates a disparity map, a map that stores the pixel shift of each corresponding pair from one camera image to the other. The disparity map is then used to reconstruct 3D coordinates for each point based on the calibrated parameters of the stereo-vision system. Since it only uses two cameras, the stereo-vision technique has obvious advantages: a simple hardware configuration and straightforward system calibration [22]. However, because it hinges on natural texture variations to establish corresponding points, the accuracy of a stereo-vision technique varies from one object to another, and the measurement accuracy is not high if an object has no distinctive features.

Lohry and Zhang [23] developed a 3D shape measurement technique that combines the advantages of a standard stereo-vision technique (e.g., speed and simplicity) with those of the phase-shifting method (e.g., accuracy). In lieu of relying on natural texture images, such a method projects a locally unique statistical pattern along with the sinusoidal fringe patterns to increase the robustness of stereo matching, and then uses the phase constraint to improve the accuracy of stereo matching. In particular, Lohry and Zhang employed the Efficient LArge-scale Stereo (ELAS) algorithm [24] to obtain a rough disparity map and then a local linear regression approach to refine the disparity map for more accurate 3D reconstruction. Instead of using the linear regression method for refinement, Song et al. [25] developed an algorithm to refine the rough correspondence by interpolating two points: one obtained from the standard stereo-vision algorithm using texture images and the other obtained from the phase map. Gai et al. [26] projected a separate speckle pattern to generate corresponding pairs and chose a proper correlation window size to remove outliers. Liu and Kofman [27] proposed inserting a background offset value into fringe patterns to provide cues for more reliable corresponding point establishment. Furthermore, they used binary patterns to reduce the probability of incorrect corresponding point determination. These additional research efforts could improve measurement speed, robustness, and/or accuracy, yet it is difficult for any of these approaches to achieve sub-pixel level accuracy.

As mentioned earlier, DFP techniques have the advantages of speed, accuracy, and flexibility, yet they all use silicon-based digital projection devices such as liquid crystal display (LCD) or digital light processing (DLP) projectors. Silicon-based projection devices can only operate properly within a limited light spectrum range and at a certain light power level. For example, the DLP projection system uses the silicon-based digital micro-mirror device (DMD): if the wavelength of light is over 2,700 nm or below 300 nm, the transmission rate drops significantly [28].

To overcome the spectrum limitation of DFP techniques, Heist et al. [29] developed a 3D shape measurement system using two cameras and one mechanical projector with a rotating wheel. The rotating wheel has open and closed slots to represent the ON/OFF states of the light. By properly defocusing the lens, aperiodic sinusoidal patterns can be generated on the object surface. Since the projector does not use a silicon-based device for pattern generation, the light spectrum of the GOBO projector can be substantially broadened for applications such as 3D thermal imaging [30]. However, 3D shape measurement is realized by capturing a sequence of phase-shifted fringe patterns and then applying a stereo-matching algorithm to a pair of phase maps captured from different perspectives. Even though high-speed data acquisition was realized, such a method did not precisely synchronize the projector with the cameras, and thus precise phase shifts cannot be ensured. Furthermore, the correspondence establishment still largely relies on computationally intensive backward-and-forward checking.

To further embrace the broad spectral band of mechanical projection technology, yet mitigate the limitations of the method developed by Heist et al. [29], this paper presents a method that can achieve both high-speed and high-accuracy 3D shape measurement. The major differences between the proposed method and that developed by Heist et al. [29] are: 1) we use a rotating wheel with equally spaced ON/OFF structures that create periodic sinusoidal fringe patterns; 2) our cameras are precisely synchronized with the projector such that fringe patterns with precise phase shifts can be acquired for precise phase reconstruction; 3) we insert a transparent film with locally unique statistical patterns such that stereo matching can be more efficiently established using a standard stereo-vision algorithm (e.g., the ELAS algorithm); and 4) we develop a novel computational framework that achieves sub-pixel stereo matching accuracy by using the phase constraint. Our prototype hardware system can accurately measure both single and multiple isolated objects, and the same hardware prototype system can potentially achieve 10,000 Hz 3D shape measurement speeds regardless of the number of phase-shifted fringe patterns required for one 3D reconstruction.

Section 2 explains the principle behind the proposed method. Section 3 presents experimental results to verify the performance of the proposed method. Section 4 discusses the advantages and shortcomings of the proposed method, and finally Sec. 5 summarizes the paper.

2. Principles

2.1. Least squares algorithm

Due to their speed, accuracy, and resolution, phase-shifting based 3D shape measurement techniques have been extensively used in the field of 3D optical metrology [31]. Assume the intensity of the k-th fringe image can be described as

$$I_k(x,y) = I'(x,y) + I''(x,y)\cos\left[\phi(x,y) - \delta_k\right], \qquad (1)$$
where I′(x, y) is the average intensity, I″(x, y) the intensity modulation, ϕ(x, y) the phase to be solved for, and δk the phase-shift value. Theoretically, only three patterns are required to compute the phase per pixel if the phase-shift values between fringe patterns are precisely known. Yet, using more fringe patterns can increase phase quality and, to various degrees, tolerate phase errors introduced by non-sinusoidal waveforms, imprecise phase shifts, etc. The phase can be retrieved by applying a least-squares algorithm to equally phase-shifted fringe patterns (i.e., δk = 2πk/N),
$$\phi(x,y) = \tan^{-1}\left[\frac{\sum_{k=1}^{N} I_k(x,y)\sin\delta_k}{\sum_{k=1}^{N} I_k(x,y)\cos\delta_k}\right]. \qquad (2)$$

Due to the use of the arctangent function in Eq. (2), the obtained phase value ranges from −π to π with 2π discontinuities. In general, a spatial or temporal phase unwrapping algorithm should be employed to remove the 2π discontinuities and create a smooth phase, called the unwrapped phase, that can then be used for 3D reconstruction. As discussed before, spatial phase unwrapping algorithms [2, 3] determine the 2π discontinuities by assuming surface smoothness and thus cannot be used to measure a single object with abrupt geometry changes or to simultaneously measure multiple isolated objects. Temporal phase unwrapping algorithms, in contrast, can fundamentally eliminate this limitation, yet they slow down the measurement by requiring the acquisition of additional images, which is not desirable for high-speed applications.

Meanwhile, the N phase-shifted fringe patterns can be used to obtain I′(x, y) by

$$I'(x,y) = \frac{1}{N}\sum_{k=1}^{N} I_k(x,y). \qquad (3)$$

I′(x, y) is often regarded as the texture image, i.e., the photograph of the object without fringe stripes. The texture image can be used for visualization or for providing additional information for analysis.
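To make the pixel-wise computation concrete, the following minimal NumPy sketch implements Eqs. (2) and (3) for N equally phase-shifted frames; the function name and array layout are ours for illustration, not from the paper.

```python
import numpy as np

def phase_and_texture(images):
    """Wrapped phase, Eq. (2), and texture image I', Eq. (3), from a stack
    of N equally phase-shifted fringe images of shape (N, H, W)."""
    imgs = np.asarray(images, dtype=np.float64)
    N = imgs.shape[0]
    delta = 2.0 * np.pi * np.arange(1, N + 1) / N     # delta_k = 2*pi*k/N
    s = np.tensordot(np.sin(delta), imgs, axes=1)     # sum_k I_k sin(delta_k)
    c = np.tensordot(np.cos(delta), imgs, axes=1)     # sum_k I_k cos(delta_k)
    phi = np.arctan2(s, c)                            # wrapped phase in (-pi, pi]
    texture = imgs.mean(axis=0)                       # I'(x, y)
    return phi, texture
```

Using np.arctan2 rather than a plain arctangent keeps the correct quadrant, which is what produces the full −π to π range discussed above.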

2.2. Phase-shifted sinusoidal fringe pattern generation with a mechanical projector

As discussed in Sec. 1, we developed a mechanical projector for high-speed 3D shape measurement. Figure 1 shows the system configuration. It includes a fiber light source, a rapidly rotating wheel, a transparent film with a statistical pattern, and two lenses (Lens 1 and Lens 2). The fiber light passes through Lens 1 to create a bright area on the rotating wheel, which is an optical chopper (Model: Thorlabs MC2000). The chopper has two optical proximity sensors that sense the slot speed and generate a square wave representing the timing of the slot rotation. The rotating wheel has evenly spaced open and closed slots that respectively pass (ON) or block (OFF) the light to create structured patterns on the wheel. Lens 2 is a projection lens that forms the image of the structured patterns on the object. The structured patterns become pseudo-sinusoidal if the object is properly placed at an out-of-focus depth position of Lens 2. Due to the transparent film with the statistical pattern on the optical path, the structured patterns formed on the object are modulated by that statistical pattern. The rationale for adding the statistical pattern is detailed in Subsec. 2.3.1.

Fig. 1 Schematic diagram of the mechanical projection system.

Since the wheel is rotating, phase-shifted fringe patterns are naturally generated if sampled at different times. For high-accuracy measurement, capturing precisely phase-shifted fringe patterns is critical. In this research, we achieve high-accuracy and high-speed 3D shape measurement through precise synchronization between the projector and the cameras. A microprocessor (Model: Raspberry Pi 2) takes the square wave generated by the mechanical projector and calculates the trigger signal period as

$$T_c = T_s / N, \qquad (4)$$
where Ts is the period of the square wave representing the projection period of each slot, Tc is the period of the trigger signal sent to the cameras, and N is the number of phase-shifted fringe patterns necessary for one 3D reconstruction. Assuming the angular velocity of the rotating wheel is ω and there are M evenly spaced slots on the wheel, the slot frequency can be calculated as
$$f_s = \frac{\omega}{2\pi \times 60} \times M \qquad (5)$$
in Hz. The microprocessor then generates a periodic pulse train that is sent to both high-speed cameras to trigger image acquisition.

Figure 2 illustrates the timing chart of the proposed system. For example, if a slot speed of fs = 1,000 Hz, or Ts = 1 ms, is set for the projector, a 1,000 Hz square wave is generated by the projector (i.e., 500 µs ON and 500 µs OFF slot time). If a three-step phase-shifting algorithm (N = 3) is used, the trigger signal period from Eq. (4) is approximately Tc = 333 µs, and thus three equally spaced pulses are generated within 1 ms. Similarly, if a four-step phase-shifting algorithm (N = 4) is used, four equally spaced pulses are generated within the 1 ms period, or Tc = 250 µs. In general, the camera exposure time texp should not be longer than the trigger pulse period, i.e., texp ≤ Tc. This timing chart indicates that, as long as the camera’s sampling speed is high enough, the time required to capture one 3D frame is solely determined by the slot period Ts and is independent of the number of phase-shifted fringe patterns required for one 3D reconstruction. In contrast, for a standard DFP system with a fixed projector refresh rate, the time required to capture one 3D frame is proportional to the number of fringe patterns required for 3D reconstruction, resulting in a slower measurement speed when more fringe patterns are required for higher accuracy. Therefore, for high-speed and high-accuracy 3D shape measurement applications, our proposed technique is advantageous.

Fig. 2 Timing diagram for the proposed high-speed 3D shape measurement system. Here Ts represents the period of the slot projection; Tc represents the period of the signal generated by the microprocessor to trigger both high-speed cameras; texp represents the exposure time of the camera; and N represents the number of phase-shifted fringe patterns required for one 3D reconstruction.
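The trigger generation logic of Eq. (4) and the numerical example above can be sketched as follows; this is an illustrative host-side calculation (function name is ours), not the actual Raspberry Pi firmware.

```python
def trigger_times_us(slot_freq_hz, n_steps, exposure_us):
    """Trigger instants (in microseconds) within one slot period, Eq. (4):
    T_c = T_s / N, subject to the constraint t_exp <= T_c."""
    T_s = 1e6 / slot_freq_hz          # slot period T_s in microseconds
    T_c = T_s / n_steps               # camera trigger period T_c
    assert exposure_us <= T_c, "exposure must not exceed the trigger period"
    return [k * T_c for k in range(n_steps)]

# Example from the text: 1,000 Hz slot speed with three-step phase shifting
print(trigger_times_us(1000, 3, 300))   # -> [0.0, 333.33..., 666.66...]
```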

2.3. Computational framework to achieve sub-pixel matching accuracy

Figure 3 shows the overall computational framework developed in this research. Two high-speed cameras, precisely synchronized with the projector, capture two sets of phase-shifted fringe patterns of the object from different perspectives. Applying Eq. (3) to the fringe images captured by each camera yields one texture image per perspective. We apply a standard stereo-matching algorithm, i.e., the ELAS algorithm [24], to generate a rough disparity map (a map representing the corresponding points) that can be used to reconstruct a coarse 3D shape of the object. Each set of phase-shifted fringe patterns also yields a wrapped phase map by applying the phase-shifting algorithm. Theoretically, if a corresponding point is precise, the wrapped phase values should be identical, and thus we apply this wrapped phase constraint to refine the rough disparity map and achieve sub-pixel correspondence accuracy for higher accuracy 3D reconstruction.

Fig. 3 Computational framework of our proposed 3D reconstruction method.
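ELAS itself is distributed as a C++ library; as an illustrative stand-in, the sketch below produces a comparable rough disparity map with OpenCV’s semi-global matcher applied to the rectified texture pair. The parameter values and the variables left_texture/right_texture (rectified 8-bit texture images) are assumptions, not values from the paper.

```python
import cv2
import numpy as np

# Semi-global block matching as a stand-in for ELAS; parameters illustrative.
block = 7
sgbm = cv2.StereoSGBM_create(
    minDisparity=0, numDisparities=128, blockSize=block,
    P1=8 * block * block, P2=32 * block * block, uniquenessRatio=10)
disp16 = sgbm.compute(left_texture, right_texture)   # int16, scaled by 16
rough_disparity = disp16.astype(np.float32) / 16.0   # rough disparity d0 per pixel
```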

This subsection details the proposed computational framework. We also briefly explain the epipolar geometry that is critical to understanding the proposed method.

2.3.1. Epipolar geometry

A standard stereo-vision algorithm typically uses epipolar geometry to increase the robustness and speed of stereo matching. Epipolar geometry essentially describes the intrinsic projective geometric constraints of a stereo-vision system. Figure 4 illustrates the fundamental concept. Ol and Or denote the focal points of the left and right camera lenses, respectively. El and Er are the points of intersection of the line $\overline{O_l O_r}$ with the two image planes, and these points are called epipoles. For a pixel Pl on the left camera image, the corresponding pixel on the right camera image can be one of the points P1, P2, or P3, depending on the depth in 3D space. Even though each point corresponds to a different depth, all of these points must fall on the same line Lr on the right camera image, which is called the epipolar line. By similar geometric relationships, all points on the line Ll can only be matched to points on the line Lr, and the plane formed by Pl, Ol, and Or is called the epipolar plane. Therefore, applying the epipolar geometry constraint essentially simplifies the complex two-dimensional search problem to a simple one-dimensional search, which increases the search speed and can enhance the robustness of the algorithm.

Fig. 4 Illustration of epipolar geometry for a stereo-vision system.
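For a calibrated pair, the one-dimensional search line can be computed directly from the fundamental matrix; a brief sketch follows, in which the pixel coordinates and the matrix F (assumed to come from calibration) are hypothetical placeholders.

```python
import cv2
import numpy as np

# Epipolar line in the right image for a left-image pixel: l_r ~ F * p_l.
p_l = np.array([[[420.0, 310.0]]], dtype=np.float32)   # hypothetical left pixel
lines = cv2.computeCorrespondEpilines(p_l, 1, F)       # each line as (a, b, c)
a, b, c = lines[0, 0]                                  # line a*u + b*v + c = 0
# The correspondence search for p_l is confined to this single line.
```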

To further improve the correspondence search speed, the stereo images are rectified such that matching points occur only on the same row; this procedure is often referred to as image rectification. Image rectification essentially translates and rotates the original images to align the epipolar lines (e.g., to make Ll and Lr lie on the same row) using the stereo-vision system calibration data. Figure 5 shows an example. Figure 5(a) shows the original image captured by the left camera. After rectification, the image is slightly distorted, as shown in Fig. 5(b). Similarly, the image captured by the right camera can also be rectified. Figure 5(c) shows the result of putting the two rectified images together, where the green horizontal lines (v1, v2, …) represent the epipolar lines. To search for the corresponding point of a given point on the left camera image, one only has to search the points on the same green line on the right image.

Fig. 5 Image rectification to facilitate correspondence searching. (a) Texture image captured by the left camera; (b) rectified image of (a); (c) a pair of rectified images for stereo matching, horizontal green lines (v1, v2, …) show representative epipolar lines.
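A typical way to perform this rectification with OpenCV, assuming the intrinsics (K1, D1, K2, D2) and extrinsics (R, T) come from a prior stereo calibration, is sketched below.

```python
import cv2

# Rectification transforms from stereo calibration; after remapping,
# corresponding points lie on the same image row.
R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(K1, D1, K2, D2, (w, h), R, T)
m1x, m1y = cv2.initUndistortRectifyMap(K1, D1, R1, P1, (w, h), cv2.CV_32FC1)
m2x, m2y = cv2.initUndistortRectifyMap(K2, D2, R2, P2, (w, h), cv2.CV_32FC1)
left_rect = cv2.remap(left_raw, m1x, m1y, cv2.INTER_LINEAR)
right_rect = cv2.remap(right_raw, m2x, m2y, cv2.INTER_LINEAR)
# Q can later re-project a disparity map to 3D via cv2.reprojectImageTo3D.
```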

Even though epipolar geometry makes the search process simpler and more robust, it may not be sufficient to determine the exact corresponding pairs from natural texture if the object lacks texture variations. To alleviate this problem, Lohry and Zhang [23] proposed encoding a statistical pattern into the phase-shifted patterns to make the projected texture locally unique. Unlike a DLP projector, which can generate such encoded images digitally, we printed the statistical pattern on a transparent film and placed it next to the spinning wheel on the optical path, as shown in Fig. 1.

2.3.2. Refinement algorithm

Although using epipolar geometry makes it easier to find corresponding pairs, and encoding an additional statistical pattern onto the projected fringe patterns further increases the robustness, one fundamental limitation of the standard stereo-vision algorithm is that it is difficult to achieve correspondence at a scale much smaller than the feature size, let alone at a sub-pixel level. As a result, applying the standard stereo-vision algorithm only gives a coarse measurement. As explained earlier, the phase value obtained from the phase-shifted patterns can be used as an additional constraint to improve correspondence accuracy and thus 3D shape measurement accuracy.

The step of using the phase constraint to further improve correspondence determination accuracy is called refinement. The proposed refinement algorithm is fundamentally based on the assumption that if two points correspond to each other, the phase values calculated from the images taken by the two cameras must be the same. Therefore, the phase maps can be used to refine the rough disparity map. The following steps describe how the phase is used to achieve sub-pixel correspondence accuracy (a code sketch is given after these steps):

  • Step 1: Find the rough corresponding point using epipolar geometry. By employing the ELAS algorithm with epipolar constraints, we determine the corresponding point on the right camera image for a given point on the simultaneously acquired left camera image. As described previously, the standard stereo-vision algorithm only provides a rough disparity map, i.e., it only roughly determines the corresponding points.

Figure 6 illustrates an example of the rough corresponding point $P_r(u_0^r, v)$ on the right camera image for a given point $P_l(u_l, v)$ on the left camera image; we call $P_r(u_0^r, v)$ the rough disparity point corresponding to $P_l(u_l, v)$. Clearly, the matching point must lie on the epipolar line v.

Fig. 6 Graphical illustration of the proposed disparity map establishment on one epipolar line v. The first row shows the two rectified images; the second row illustrates the rough corresponding point establishment using the standard stereo-vision algorithm on the rectified texture image; the third row illustrates the first step of refinement by applying the phase constraint, i.e., the initial corresponding point $P_r(u_0^r, v)$ is shifted by τ0 to $P_r(u_0^r + \tau_0, v)$; and the bottom row shows the last refinement stage by sub-pixel interpolation, further moving $P_r(u_0^r + \tau_0, v)$ by Δτ to the ultimate matching point $P_r(u_r, v)$.

  • Step 2: Apply the phase constraint to more precisely locate the corresponding point. The rough disparity map obtained from Step 1 can be refined by applying the phase constraint. Because the texture image and the phase map are perfectly aligned, the rough disparity point determined in the previous step carries an underlying phase value at the same point. Assume the disparity value between the point $P_l(u_l, v)$ and the point $P_r(u_0^r, v)$ determined from the texture images is $d_0 = u_0^r - u_l$. The precise matching point could be on the left or on the right of $(u_0^r, v)$. In this research, we search within ±5 pixels and determine the more precise corresponding point $P_r(u_r + \tau_0, v)$ by satisfying
    $$\left[\phi_r(u_r+\tau_0, v) - \phi_l(u_l, v)\right]\left[\phi_r(u_r+\tau_0+1, v) - \phi_l(u_l, v)\right] \le 0, \qquad (6)$$
    where τ0 is the additional disparity shift along the u_r direction on the epipolar line v.
  • Step 3: Determine the sub-pixel accuracy correspondence through linear interpolation. After applying Step 2, the true corresponding point should lie within $[u_r + \tau_0, u_r + \tau_0 + 1]$; the sub-pixel shift Δτ can be determined by linearly interpolating between these two points using
    $$\Delta\tau = \frac{\phi_l(u_l, v) - \phi_r(u_r+\tau_0, v)}{\phi_r(u_r+\tau_0+1, v) - \phi_r(u_r+\tau_0, v)}. \qquad (7)$$
    Combining the initial disparity d0, the additional shift τ0 after applying the phase constraint, and the sub-pixel shift Δτ, we can calculate the precise disparity d between the left camera point and the corresponding right camera point as
    $$d = d_0 + \tau_0 + \Delta\tau = u_0^r - u_l + \tau_0 + \Delta\tau. \qquad (8)$$

Once the precise disparity map is established, the 3D coordinates of each pixel can be calculated using a standard stereo-vision 3D reconstruction algorithm.
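A minimal sketch of Steps 2 and 3 for a single pixel is given below; it assumes rectified wrapped-phase maps phi_l and phi_r indexed as [row, column] and an integer rough disparity d0, and, for brevity, it ignores the 2π wrap at fringe-period boundaries noted in Sec. 3.

```python
import numpy as np

def refine_disparity(phi_l, phi_r, ul, v, d0, search=5):
    """Phase-constrained refinement, Eqs. (6)-(8): scan +/- `search` pixels
    around the rough match for a sign change of the phase difference, then
    interpolate linearly to sub-pixel precision."""
    target = phi_l[v, ul]
    u0r = ul + d0                             # rough match column (d0 = u0r - ul)
    for tau0 in range(-search, search):
        e0 = phi_r[v, u0r + tau0] - target
        e1 = phi_r[v, u0r + tau0 + 1] - target
        if e0 * e1 <= 0:                      # bracketing condition, Eq. (6)
            dtau = -e0 / (e1 - e0)            # linear interpolation, Eq. (7)
            return d0 + tau0 + dtau           # precise disparity, Eq. (8)
    return float(d0)                          # no bracket found: keep rough value
```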

3. Experiment

We developed a prototype system to verify the performance of the proposed method. Figure 7 shows a photograph of the hardware setup. Our system consists of two high-speed cameras (Model: Phantom 340L), each attached to a lens (Model: SIGMA 24 mm f/1.8 EX DG), and one mechanical projector whose principle was described in Subsec. 2.2. We used an optical chopper system (Model: Thorlabs MC2000) to create the structured patterns. The optical chopper system controls the slot speed of the rotating wheel (Model: Thorlabs MC1F100), which has 100 equally spaced slots. We used a halogen lamp and an optical fiber (Model: Thorlabs OSL2RFB) as the light source for the projection system. Two additional lenses (Model: Nikon AF 50 mm f/1.8D and Fuji Fujinon 75 mm f/1.8) were placed on the optical path to determine the field of view and the number of fringe periods. The two cameras captured the projected fringe patterns from slightly different perspectives. A microprocessor (Model: Raspberry Pi 2) and a function generator (Model: Tektronix AFG 3022B) were used to generate the external signal that precisely synchronizes the cameras with the projector. We printed a statistical pattern on a transparent film and positioned it right behind the rotating wheel to facilitate correspondence determination by the standard stereo-vision algorithm. We set the camera resolution to 1024 × 1024 for all static object experiments and 512 × 512 for dynamically moving object experiments.

Fig. 7 Photograph of the experimental hardware system setup.

We first measured a sphere (i.e., a ping-pong ball) to evaluate the measurement accuracy of our proposed method. Figures 8 and 9 show the experimental results. Figure 8(a) shows one of the three phase-shifted fringe patterns captured by the left camera, and Fig. 8(b) shows the texture image obtained by averaging the three phase-shifted fringe images. The texture image contains a statistical pattern because the light passes through the printed transparent film. The wrapped phase obtained from these three phase-shifted fringe images is shown in Fig. 8(c). The same procedures were applied to the fringe patterns captured by the other camera at the same time; Figs. 8(d)–8(f) show the corresponding results.

Fig. 8 Measurement results of a ping-pong ball. (a) One of three phase-shifted fringe patterns captured by the left camera; (b) the texture image obtained by averaging three fringe patterns captured by the left camera; (c) wrapped phase map from those images captured by the left camera; (d)–(f) corresponding images for the right camera.

Fig. 9 Measurement results of the ping-pong ball shown in Fig. 8. (a) 3D reconstruction using the rough disparity map generated by the ELAS algorithm; (b) 3D reconstruction from the refined disparity map after applying our proposed refinement algorithm; (c) overlay of the fitted ideal sphere and the measured data; (d) difference map between the fitted ideal sphere and the measured data (mean error of approximately 6 µm, and standard deviation of approximately 78 µm).

We applied the ELAS algorithm to the two texture images shown in Figs. 8(b) and 8(e) to generate a rough disparity map, from which we reconstructed the 3D model shown in Fig. 9(a). This figure shows that even though the sphere surface is smooth, the 3D geometry reconstructed from the rough disparity map is rough. We further employed our proposed disparity map refinement framework to generate a more accurate disparity map. Figure 9(b) shows the 3D result reconstructed from the refined disparity map, showing obvious improvements over the result obtained without our proposed computational framework.

We further evaluated the measurement accuracy by comparing our measured result with an ideal sphere. We adopted a least-squares algorithm to fit the measured data with an ideal sphere with a diameter of 40 mm (the size of a ping-pong ball). Figure 9(c) shows an image that overlays the fitted ideal sphere with the measured data. We then took the difference between the ideal sphere and the measured data, shown in Fig. 9(d). The mean measurement error is approximately 6 µm, and the standard deviation of the measurement error is approximately 78 µm, demonstrating that our proposed method can indeed achieve high-accuracy measurement.
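The paper does not specify its fitting routine; one plausible sketch, fixing the sphere radius at 20 mm (a 40 mm diameter ball) and optimizing only the center, is:

```python
import numpy as np
from scipy.optimize import least_squares

def fit_fixed_radius_sphere(points, radius=20.0):
    """Least-squares fit of a known-radius sphere to Nx3 measured points;
    returns the fitted center and per-point radial errors (cf. Fig. 9(d))."""
    def residuals(center):
        return np.linalg.norm(points - center, axis=1) - radius
    center0 = points.mean(axis=0)           # crude initial center guess
    sol = least_squares(residuals, center0)
    return sol.x, residuals(sol.x)
```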

We also measured a statue with complex surface geometry to further verify the performance of our proposed method. Figure 10(a) shows a photograph of the statue, Fig. 10(b) shows one of the three phase-shifted fringe images, and Fig. 10(c) shows the texture image calculated by averaging the three phase-shifted fringe images. As before, we applied the ELAS algorithm to generate a rough disparity map that was used to reconstruct the rough 3D geometry shown in Fig. 10(d). Figure 10(e) shows the 3D result reconstructed from the disparity map obtained by applying the phase constraint to determine the pixel with the closest phase value. Figure 10(f) shows the final result after employing the proposed sub-pixel level refinement procedure. Compared with the result shown in Fig. 10(d), the result obtained from our proposed method once again shows substantially improved measurement quality.

Fig. 10 Measurement results of a statue with complex geometry. (a) Photograph of the sculpture; (b) one of three phase-shifted fringe patterns captured by the left camera; (c) the corresponding texture image; (d) 3D reconstruction using the rough disparity map generated by the ELAS algorithm; (e) 3D reconstruction by applying the phase constraint; (f) 3D reconstruction using our proposed sub-pixel level refinement algorithm.

Figure 11 shows close-up views of the results in Fig. 10 to better visualize the differences. This figure clearly shows that the proposed sub-pixel level refinement algorithm indeed gives the best quality 3D data.

Fig. 11 Close-up views of the results from Fig. 10 around the mouth region. (a) Zoom-in view of Fig. 10(a); (b) zoom-in view of Fig. 10(d); (c) zoom-in view of Fig. 10(e); (d) zoom-in view of Fig. 10(f).

Furthermore, we simultaneously measured two isolated objects to demonstrate that our proposed method can recover absolute phase for absolute 3D shape measurement. Figure 12(a) shows a photograph of the two objects. Figures 12(b) and 12(c) respectively show the 3D result reconstructed from the rough disparity map and that from the refined disparity map after applying our proposed computational framework. This experiment successfully demonstrated that our proposed method can indeed measure the absolute 3D geometry of multiple isolated objects.

Fig. 12 Measurement results of multiple isolated objects. (a) Photograph of the objects; (b) 3D reconstruction using the rough disparity map; (c) 3D reconstruction using the refined disparity map.

Lastly, we conducted an experiment to demonstrate the capability of high-speed 3D shape measurement. In this experiment, we set the camera resolution to 512 × 512, the exposure time to 105 µs, the slot speed to 3,000 Hz, and used the N = 3 step phase-shifting algorithm for phase calculation (i.e., the cameras actually capture images at 3,000 × 3 = 9,000 Hz). Figure 13 shows a few representative 3D frames, and the associated Visualization 1 includes the entire recorded sequence. This experiment confirmed that our proposed method can be used for high-speed applications. It should be noted that although the slot speed of the projection system can go up to 10,000 Hz for our particular setup, we chose 3,000 Hz for this experiment because of the limited fiber light power.

Fig. 13 Experimental results of measuring a rapidly moving object: five representative frames from the recorded sequence shown in the associated Visualization 1.

It is important to note that, compared with the static measurement results shown in Fig. 10(f), the high-speed measurement quality is obviously lower. We believe this reduced measurement quality could be caused by the following factors: 1) the lower camera resolution used for high-speed measurement, i.e., 1024 × 1024 for static measurements versus 512 × 512 for high-speed measurements; 2) larger camera noise for high-speed measurements due to the reduced exposure time; 3) low fringe quality generated by the spinning wheel, because the fringe stripes are very wide and remain non-sinusoidal even after defocusing, as shown in Figs. 8(a), 8(d), and 10(b); 4) the phase quality produced from the left camera could differ from that produced by the right camera because of their different perspectives; and 5) the phase-based interpolation may not be precise due to the circular nature of the fringe patterns.

4. Discussion

The proposed high-speed and high-accuracy 3D shape measurement technique has the following major advantageous features:

  • High measurement speed. The proposed technique can always achieve the same 3D measurement speed as the projector speed (10,000 Hz in our case), regardless of the number of phase-shifted fringe images required for one 3D reconstruction, as long as the camera speed is high enough. In contrast, if the projector’s refresh rate is fixed, the measurement speed of a conventional DFP system decreases as the number of phase-shifted fringe images used for one 3D reconstruction increases.
  • High measurement accuracy. The proposed method can also achieve high measurement accuracy because 1) phase-shifted fringe patterns are precisely captured through precise synchronization between the cameras and the projector; 2) a statistical pattern is used to enhance the robustness of the initial rough disparity map calculation; and 3) a novel disparity map refinement framework achieves sub-pixel level disparity determination accuracy.
  • Broad light spectrum. Since the mechanical projection device uses a metal plate with ON and OFF slots, a broad spectrum of light can be used to generate the desired fringe patterns for 3D shape measurement. In contrast, conventional DFP systems use silicon-based projectors (e.g., LCD or DLP), and the light spectrum is greatly restricted to the region in which silicon devices can function properly.

However, the proposed method is not trouble-free. Unlike state-of-the-art DLP-based high-speed 3D shape measurement techniques, the proposed method requires two high-speed cameras to realize absolute 3D shape measurement, mainly because it is more difficult to precisely calibrate the mechanical projection device than a DLP projector, and it is also more difficult to generate fringe patterns of different frequencies for absolute phase recovery.

5. Summary

This paper has presented a method for high-speed and high-accuracy 3D shape measurement using a mechanical fringe projection system. In lieu of a silicon-based projection device such as a DLP or LCD projector, the proposed fringe projection system uses a metal-based pattern generation mechanism that allows the use of a much broader spectrum of light for 3D shape measurement. The proposed technique achieves both high-speed and high-accuracy 3D shape measurement by precisely synchronizing the cameras with the projector and by developing a novel computational framework for sub-pixel disparity map generation. We developed a prototype hardware system that can accurately measure both single and multiple isolated objects. The same hardware prototype system could potentially achieve 10,000 Hz 3D shape measurement speeds regardless of the number of phase-shifted fringe patterns required for one 3D reconstruction.

Funding

National Science Foundation (NSF) (CMMI-1531048).

References and links

1. S. Zhang, High-Speed 3D Imaging with Digital Fringe Projection Techniques, 1st ed. (CRC Press, 2016).

2. D. C. Ghiglia and M. D. Pritt, Two-Dimensional Phase Unwrapping: Theory, Algorithms, and Software (John Wiley and Sons, 1998).

3. X. Su and W. Chen, “Reliability-guided phase unwrapping algorithm: a review,” Opt. Laser Eng. 42, 245–261 (2004).

4. K. Creath, “Step height measurement using two-wavelength phase-shifting interferometry,” Appl. Opt. 26, 2810–2816 (1987).

5. Y.-Y. Cheng and J. C. Wyant, “Multiple-wavelength phase shifting interferometry,” Appl. Opt. 24, 804–807 (1985).

6. G. Sansoni, M. Carocci, and R. Rodella, “Three-dimensional vision based on a combination of gray-code and phase-shift light projection: analysis and compensation of the systematic errors,” Appl. Opt. 38, 6565–6573 (1999).

7. K. Zhong, Z. Li, Y. Shi, and C. Wang, “Analysis of solving the point correspondence problem by trifocal tensor for real-time phase measurement profilometry,” Proc. SPIE 8493, 8493 (2012).

8. Z. Li, K. Zhong, Y. Li, X. Zhou, and Y. Shi, “Multiview phase shifting: a full-resolution and high-speed 3D measurement framework for arbitrary shape dynamic objects,” Opt. Lett. 38, 1389–1391 (2013).

9. K. Zhong, Z. Li, Y. Shi, C. Wang, and Y. Lei, “Fast phase measurement profilometry for arbitrary shape objects without phase unwrapping,” Opt. Laser Eng. 51, 1213–1222 (2013).

10. C. Bräuer-Burchardt, P. Kühmstedt, and G. Notni, “Phase unwrapping using geometric constraints for high-speed fringe projection based 3D measurements,” Proc. SPIE 8789, 878906 (2013).

11. R. Ishiyama, S. Sakamoto, J. Tajima, T. Okatani, and K. Deguchi, “Absolute phase measurements using geometric constraints between multiple cameras and projectors,” Appl. Opt. 46, 3528–3538 (2007).

12. C. Bräuer-Burchardt, C. Munkelt, M. Heinze, P. Kühmstedt, and G. Notni, “Using geometric constraints to solve the point correspondence problem in fringe projection based 3D measuring systems,” in Proc. 16th International Conference on Image Analysis and Processing (2011), pp. 265–274.

13. Y. R. Huddart, J. D. R. Valera, N. J. Weston, and A. J. Moore, “Absolute phase measurement in fringe projection using multiple perspectives,” Opt. Express 21, 21119–21130 (2013).

14. C. Jiang and S. Zhang, “Absolute phase unwrapping for dual-camera system without embedding statistical features,” Opt. Eng. 56, 094114 (2017).

15. V. Kolmogorov and R. Zabih, “Multi-camera scene reconstruction via graph cuts,” in Proc. European Conference on Computer Vision (2002), pp. 82–96.

16. J. Kostková and R. Sára, “Stratified dense matching for stereopsis in complex scenes,” in Proc. British Machine Vision Conference (2003), pp. 339–348.

17. H. Hirschmüller, “Stereo processing by semiglobal matching and mutual information,” IEEE Trans. Pattern Anal. Mach. Intell. 30, 328–341 (2008).

18. F. Besse, C. Rother, A. W. Fitzgibbon, and J. Kautz, “PMBP: PatchMatch belief propagation for correspondence field estimation,” Int. J. Comput. Vis. 110, 2–13 (2013).

19. S. Xu, F. Zhang, X. He, X. Shen, and X. Zhang, “PM-PM: PatchMatch with Potts model for object segmentation and stereo matching,” IEEE Trans. Image Process. 24, 2182–2196 (2015).

20. S. Zhu and L. Yan, “Local stereo matching algorithm with efficient matching cost and adaptive guided image filter,” Vis. Comput. 33, 1087–1102 (2017).

21. T. Kanade and M. Okutomi, “A stereo matching algorithm with an adaptive window: theory and experiment,” IEEE Trans. Pattern Anal. Mach. Intell. 16, 920–932 (1994).

22. B. Li, Y. An, D. Cappelleri, J. Xu, and S. Zhang, “High-accuracy, high-speed 3D structured light imaging techniques and potential applications to intelligent robotics,” Int. J. Intell. Robot. Appl. 1, 86–103 (2017).

23. W. Lohry, V. Chen, and S. Zhang, “Absolute three-dimensional shape measurement using coded fringe patterns without phase unwrapping or projector calibration,” Opt. Express 22, 1287–1301 (2014).

24. A. Geiger, M. Roser, and R. Urtasun, “Efficient large-scale stereo matching,” in Proc. Asian Conference on Computer Vision, LNCS 6492 (2011), pp. 25–38.

25. K. Song, S. Hu, X. Wen, and Y. Yan, “Fast 3D shape measurement using Fourier transform profilometry without phase unwrapping,” Opt. Laser Eng. 84, 74–81 (2016).

26. S. Gai, F. Da, and X. Dai, “Novel 3D measurement system based on speckle and fringe pattern projection,” Opt. Express 24, 17686–17697 (2016).

27. X. Liu and J. Kofman, “High-frequency background modulation fringe patterns based on a fringe-wavelength geometry-constraint model for 3D surface-shape measurement,” Opt. Express 25, 16618–16628 (2017).

28. D. Dudley, W. Duncan, and J. Slaughter, “Emerging digital micromirror device (DMD) applications,” Proc. SPIE 4985, 1 (2003).

29. S. Heist, P. Lutzke, I. Schmidt, P. Dietrich, P. Kühmstedt, A. Tünnermann, and G. Notni, “High-speed three-dimensional shape measurement using GOBO projection,” Opt. Laser Eng. 87, 90–96 (2016).

30. A. Brahm, C. Rössler, P. Dietrich, S. Heist, P. Kühmstedt, and G. Notni, “Non-destructive 3D shape measurement of transparent and black objects with thermal fringes,” Proc. SPIE 9868, 98680C (2016).

31. D. Malacara, ed., Optical Shop Testing, 3rd ed. (John Wiley and Sons, 2007).

Supplementary Material (1)

Visualization 1: the entire recorded sequence of the high-speed measurement of a rapidly moving object (see Fig. 13).


