Abstract

A 3D shape measurement method in the presence of strong interreflections is presented. Traditional optical 3D shape measurement methods such as fringe projection profilometry (FPP) cannot measure regions that contain strong interreflections, which lead to 3D shape measurement failure. In the proposed method, epipolar imaging with speckle patterns is utilized to eliminate the effects of interreflections and obtain an initial 3D shape measurement result. Regional fringe projection based on the initial measurement result is further applied to achieve high-accuracy measurement. Experimental results show that the proposed method can measure regions that contain strong interreflections with high accuracy.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Recently, optical 3D shape measurement [1] has been widely applied in many fields such as industry, art, and medicine. However, strong interreflections occur between the surfaces of concave, glossy objects such as metal and ceramic parts. Interreflections arise when points on a concave surface are not only directly illuminated by the light source but also illuminated by indirect light reflected between the surfaces. Interreflections may cause 3D shape measurement failure for most optical 3D shape measurement methods, such as fringe projection profilometry (FPP) [2], a popular structured light technique.

In recent years, several methods that can measure 3D shape in the presence of interreflections have been proposed. Phase-shifting fringe projection can be combined with stereo vision to eliminate errors caused by diffuse interreflections [3,4]. The phase disturbance caused by diffuse interreflections does not introduce errors because it is the same from different views, and the phase is only used to establish correspondences between the two cameras. However, this approach is unsuitable for glossy surfaces because glossy reflection changes with the viewing direction. The errors caused by interreflections can also be compensated using initial inaccurate measurement results together with the estimated surface reflectance [5]. However, this method is ineffective for regions without initial measurement results.

High-frequency patterns are resistant to low-frequency interreflections because diffuse and low-frequency glossy interreflections remain nearly constant when high-frequency patterns are projected [6]. Low-frequency fringe patterns can be modulated by high-frequency patterns to separate direct light from indirect light [7]. High-frequency sinusoidal fringe patterns can be projected to keep the indirect light constant [8–10]. Traditional Gray codes can be replaced by high-frequency logical codes to eliminate decoding errors [11]. High-frequency band-pass unstructured white-noise patterns have also been utilized to overcome interreflections in 3D scanning [12,13]. However, these methods fail in the presence of strong, high-frequency glossy interreflections, especially mirror-like interreflections.

A concave surface can be split into several regions such that no interreflections occur within any single region. The projector and camera are then masked to measure these regions separately, avoiding the effects of interreflections [14,15]. However, this approach requires an initial 3D shape of the object, such as its CAD model, to generate the masks. Another class of methods identifies the regions with errors due to interreflections and then applies an iterative approach to reduce the errors [16,17]. In each iteration, only the regions with errors are projected and reconstructed to reduce the number of erroneous points. However, the iterations may not converge when the indirect light is too strong.

Structured light transport (SLT) [18] is another technique that can acquire 3D shape in the presence of interreflections. SLT is based on the observation that all of the direct light but only a small amount of indirect light travels in the epipolar plane, whereas most of the indirect light travels outside the epipolar plane. If an epipolar line is projected, most of the indirect light can be removed by retaining only the camera pixels corresponding to the projected epipolar line and discarding the others. The prototype consists of two synchronized digital micro-mirror devices (DMDs), one for projection and the other for masking the camera. Another implementation of SLT uses a laser scan projector and a rolling shutter camera [19]. The optical axes of the projector and camera are adjusted to be parallel so that the epipolar lines are horizontal, and the exposure timing of the camera is adjusted to match the scanline of the projector. In conclusion, among the existing methods only SLT can measure 3D shape in the presence of strong, high-frequency glossy interreflections without knowing the initial shape of the object, but its measurement accuracy is limited.

In our previous research [15], regional fringe projection based on the known CAD model of the object was proposed to measure the 3D shape of an object in the presence of strong interreflections. However, this method does not work when the CAD model is unavailable or when the actual shape of the object differs greatly from its CAD model. This restriction can be removed by acquiring an initial 3D shape of the object.

In this research, a 3D shape measurement method that handles strong interreflections without a known CAD model is proposed. Epipolar imaging with speckle patterns is applied to eliminate the effects of interreflections and obtain an initial low-accuracy measurement result. Regional fringe projection based on the initial result is then utilized to obtain a high-accuracy measurement. Experimental results indicate that the proposed method can handle situations with strong interreflections at high measurement accuracy.

The paper is organized as follows. Principles of the proposed method are explained in Section 2. Experiments of the proposed method are described in Section 3, and conclusions are presented in Section 4.

2. Principle

A typical FPP system consists of a camera and a projector, as illustrated in Fig. 1(a). In traditional methods, concave surfaces with strong interreflections [Fig. 1(b)] must be powdered before measurement. Otherwise, the interreflections between surfaces may lead to 3D shape measurement failure because the indirect fringe patterns introduce phase errors, as analyzed in Section 2.1.

Fig. 1 FPP in the presence of interreflections. (a) Typical setup of a FPP system; (b) Fringe patterns with strong glossy interreflections between concave surfaces (blue solid line: direct light path; red dashed line: indirect light path).

A 3D shape measurement method in the presence of strong interreflections is proposed, which mainly consists of three steps as depicted in Fig. 2.

Fig. 2 Framework of the proposed method.

2.1 Influence of strong glossy interreflections in FPP

In FPP, the phase-shifting sinusoidal fringe patterns are projected by the projector and the camera captures the deformed fringe patterns on the object. The intensities of four-step phase-shifting images Ii (i = 0, 1, 2, 3) can be described as

$$I_i(x,y)=I_A(x,y)+I_B(x,y)\cos\left[\phi(x,y)+\frac{\pi}{2}i\right],$$
where IA(x,y) is the average intensity, IB(x,y) is the intensity modulation, and ϕ(x,y) is the wrapped phase that can be retrieved by
$$\phi(x,y)=\arctan\frac{I_3(x,y)-I_1(x,y)}{I_0(x,y)-I_2(x,y)},$$
and then the phase ambiguity is removed by the multi-wavelength heterodyne phase unwrapping method [20]. The unwrapped phase Φ1(x,y) can be obtained by
$$\Phi_1(x,y)=\phi_1(x,y)+2\pi\,\mathrm{round}\left\{\frac{1}{2\pi}\left[\frac{\lambda_2}{\lambda_2-\lambda_1}\left(\phi_1(x,y)-\phi_2(x,y)\right)-\phi_1(x,y)\right]\right\},$$
where λ1 and λ2 are the wavelengths (λ1 < λ2), ϕ1(x,y) and ϕ2(x,y) are the wrapped phase of two wavelengths. Finally, stereo matching is performed between the camera phase and the projector phase to obtain the 3D point cloud.
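As an illustration, the four-step phase retrieval of Eq. (2) and the heterodyne unwrapping of Eq. (3) can be sketched in a few lines of NumPy; the function and variable names are illustrative, and `arctan2` is used in place of `arctan` to recover the full quadrant:

```python
import numpy as np

def wrapped_phase(I0, I1, I2, I3):
    """Wrapped phase from four phase-shifted images (arctan2 form of Eq. (2))."""
    return np.arctan2(I3 - I1, I0 - I2)

def unwrap_heterodyne(phi1, phi2, lam1, lam2):
    """Multi-wavelength heterodyne unwrapping of phi1 (Eq. (3)).

    The wrapped difference phi1 - phi2 is unambiguous over one beat period
    lam1 * lam2 / (lam2 - lam1), so it serves as a coarse absolute phase."""
    beat = (phi1 - phi2) % (2 * np.pi)
    coarse = lam2 / (lam2 - lam1) * beat          # coarse phase at wavelength lam1
    k = np.round((coarse - phi1) / (2 * np.pi))   # fringe order
    return phi1 + 2 * np.pi * k
```

For instance, with the wavelengths 13 and 14 used later in the experiments, the beat period spans 182 pixels, over which the fringe order can be recovered without additional codes.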

Assume there are two overlapped fringe patterns: the direct fringe pattern, denoted I1(x,y), and the indirect fringe pattern, denoted I2(x,y)

$$I_1(x,y)=I_{A1}(x,y)+I_{B1}(x,y)\cos[\phi_1(x,y)],$$
$$I_2(x,y)=I_{A2}(x,y)+I_{B2}(x,y)\cos[\phi_2(x,y)]=I_{A2}(x,y)+I_{B2}(x,y)\cos[\phi_1(x,y)+\Delta\phi(x,y)],$$
where IA1(x,y) and IA2(x,y) are the average intensities of two fringe patterns, IB1(x,y) and IB2(x,y) are the intensity modulations of two fringe patterns, ϕ1(x,y) and ϕ2(x,y) are the phases of two fringe patterns, and Δϕ(x,y) is the phase difference between two fringe patterns. The overlapped fringe patterns can also be considered a sinusoidal fringe pattern that can be expressed as
$$I_s(x,y)=I_{As}(x,y)+I_{Bs}(x,y)\cos[\phi_s(x,y)],$$
where ϕs(x,y) is the phase of synthetic fringe pattern that can be represented as
$$\phi_s(x,y)=\phi_1(x,y)+\Delta\phi_s(x,y),$$
where Δϕs(x,y) is the phase error, which is defined by

$$\tan\Delta\phi_s(x,y)=\frac{I_{B2}(x,y)\sin[\Delta\phi(x,y)]}{I_{B1}(x,y)+I_{B2}(x,y)\cos[\Delta\phi(x,y)]}.$$

The analysis above indicates that the indirect fringe pattern introduces a phase error Δϕs(x,y), which leads to 3D shape measurement failure. In practice, the reflected light may be composed of multiple indirect patterns; the phase error in this case is analyzed in [8].
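A quick numeric check of the phase-error expression above (the modulation values are hypothetical; `arctan2` keeps the correct quadrant):

```python
import numpy as np

def phase_error(IB1, IB2, dphi):
    """Phase error introduced by an indirect fringe of modulation IB2
    overlapping a direct fringe of modulation IB1 at phase offset dphi."""
    return np.arctan2(IB2 * np.sin(dphi), IB1 + IB2 * np.cos(dphi))
```

As expected, the error vanishes when the indirect modulation IB2 is zero and grows with the ratio IB2/IB1, which is why strong glossy interreflections make plain FPP fail.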

In the proposed method, FPP is utilized to measure the regions that contain only direct light and to determine the rows that require epipolar imaging. The high dynamic range fringe acquisition (HDRFA) technique [21], which projects multi-intensity patterns and captures multi-exposure images, is utilized to obtain a complete measurement. Vertical and horizontal fringe patterns are projected: the phase retrieved from the vertical fringe patterns is used to determine the correspondences between camera and projector associated with epipolar lines, and the phase retrieved from the horizontal fringe patterns is used to exclude the regions with significant errors caused by interreflections. For each pair of matched points, the horizontal phases of the projector and camera are inconsistent in the regions that contain interreflections.

To reduce the number of rows on which epipolar imaging must be performed in the next section, the rows that contain interreflections and cannot be measured by FPP must be identified, as depicted in Fig. 3. Three criteria, namely, p, q, and r, are utilized to determine these regions as follows

$$\begin{cases}
p:\ \dfrac{\left|\Phi_c(x,y)-\Phi_p(x,y)\right|}{2\pi}\,\lambda_1>\varepsilon_1\\[1ex]
q:\ \displaystyle\sum_{i=1}^{n}\left|d_i(x,y)-\frac{1}{n}\sum_{i=1}^{n}d_i(x,y)\right|>\varepsilon_2\\[1ex]
r:\ I_B(x,y)>\varepsilon_3,
\end{cases}$$
where Φc(x,y) and Φp(x,y) represent the unwrapped phases of the horizontal fringes in the camera and projector obtained by Eq. (3), λ1 is the wavelength, and di(x,y) is the disparity of wavelength λi.

Fig. 3 Determining the rows required to perform epipolar imaging. (a) Pixels that satisfy p ∧ r; (b) Pixels that satisfy q ∧ r; (c) Pixels that satisfy (p ∨ q) ∧ r; (d) Mask after morphological operations. The gray band labels the rows required to perform epipolar imaging.

The first criterion p implies that the y-direction difference between the camera and projector computed from the horizontal phase is larger than ε1 pixels. This difference is introduced by the phase error caused by indirect light, and ε1 depends on the calibration accuracy of the stereo vision system.

The second criterion q denotes that the matched points of the multi-wavelength fringes are inconsistent, which is caused by the overlap of direct and indirect light. ε2 depends on the phase error introduced by the noise of the imaging system.

The third criterion r indicates that the intensity modulation of the fringe pattern is larger than ε3, which is used to remove the low-modulation regions (such as backgrounds) where the noisy phase leads to wrong identification result.

Morphological operations are applied to remove isolated points and obtain the final mask, as illustrated in Fig. 3(d), and only the rows with white pixels are required to perform epipolar imaging.
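A minimal sketch of this row-selection logic (the criteria of Eq. (9) plus a morphological clean-up); the array names and the use of `scipy.ndimage.binary_opening` are illustrative choices, not the authors' exact implementation:

```python
import numpy as np
from scipy import ndimage

def rows_for_epipolar_imaging(Phi_c, Phi_p, disp, IB, lam1,
                              eps1=1.0, eps2=0.5, eps3=20.0):
    """Return the image rows that require epipolar imaging.

    Phi_c, Phi_p : unwrapped horizontal phase in camera / projector, shape (H, W)
    disp         : per-wavelength disparity maps, shape (n, H, W)
    IB           : intensity modulation map, shape (H, W)
    """
    p = np.abs(Phi_c - Phi_p) / (2 * np.pi) * lam1 > eps1    # phase error in pixels
    q = np.abs(disp - disp.mean(axis=0)).sum(axis=0) > eps2  # inconsistent disparity
    r = IB > eps3                                            # enough modulation
    mask = ndimage.binary_opening((p | q) & r)               # drop isolated points
    return np.flatnonzero(mask.any(axis=1))                  # rows with white pixels
```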

2.2 Epipolar imaging with speckle patterns

Epipolar imaging, which is an imaging mode of the SLT technique [18], can eliminate indirect light based on the observation that the exit point of indirect light is not its incident point. As shown in Fig. 4, the epipolar plane is formed by a point on the object and the optical centers of the camera and projector, and the two intersecting lines of the image planes are the homologous epipolar lines. If a single epipolar line is projected, this line can be identified directly in the captured camera image by the epipolar constraint. Direct light, which reflects only once, travels in the epipolar plane, whereas indirect light, which reflects twice or more, travels mainly outside the epipolar plane. Consequently, most of the indirect light can be eliminated when the camera retains only the pixels corresponding to the projected epipolar line.

Fig. 4 Epipolar imaging in the FPP system. The projected epipolar line becomes two lines in the captured image due to interreflections, and the line of direct light can be identified by epipolar constraint.

The procedure of epipolar imaging can be expressed as

$$I'(x,y)=\sum_{i=1}^{n}\tilde{I}'_i(x,y)=\sum_{i=1}^{n}I'(x,y)M'_i(x,y),$$
$$\tilde{I}(x,y)=\sum_{i=1}^{n}\tilde{I}_i(x,y)=\sum_{i=1}^{n}I_i(x,y)M_i(x,y),$$
where I′(x,y) is the original projected pattern, which is split into n masked projected patterns Ĩ′1,…,n(x,y) by epipolar lines. For each epipolar line li, the process can be described as depicted in Fig. 5: the original projected pattern I′(x,y) is multiplied by the projector mask M′i(x,y) to obtain the masked projected pattern Ĩ′i(x,y), and the original captured image Ii(x,y), which contains interreflections, is multiplied by the camera mask Mi(x,y) to obtain the masked image Ĩi(x,y), in which interreflections are eliminated. All masked images Ĩ1,…,n(x,y) are summed to form the fused image Ĩ(x,y). The epipolar lines used to generate the projector mask M′i(x,y) and the camera mask Mi(x,y) can be obtained by
$$\mathbf{l}'=\mathbf{F}\mathbf{p},$$
$$\mathbf{l}=\mathbf{F}^{T}\mathbf{p}',$$
where F is the fundamental matrix of the stereo vision system formed by the projector and camera, p=(x,y,1)^T and p′=(x′,y′,1)^T are the homologous image points of the projector and camera, and l=(a,b,c)^T is the epipolar line in the projector image corresponding to the camera image point p′. Similarly, l′=(a′,b′,c′)^T is the epipolar line in the camera image corresponding to the projector image point p [22].
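As a small sanity check, these epipolar-line relations can be verified with NumPy. The fundamental matrix below is that of an ideally rectified pair (a convenient assumption for illustration, not the calibrated matrix of the actual system), for which all epipolar lines are horizontal:

```python
import numpy as np

# Fundamental matrix of an ideally rectified projector-camera pair.
F = np.array([[0., 0.,  0.],
              [0., 0., -1.],
              [0., 1.,  0.]])

def line_in_camera(F, p):
    """Epipolar line l' = F p in the camera image for projector point p."""
    return F @ p

def line_in_projector(F, pp):
    """Epipolar line l = F^T p' in the projector image for camera point p'."""
    return F.T @ pp
```

For a projector pixel p = (320, 240, 1)^T the camera line is (0, −1, 240)^T, i.e. the horizontal row v = 240, and any matching camera pixel p′ on that row satisfies the epipolar constraint p′^T F p = 0.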

Fig. 5 The epipolar imaging procedure of a single epipolar line. The original projected pattern I′(x,y) is multiplied by the projector mask M′i(x,y) to obtain the masked projected pattern Ĩ′i(x,y), and the original captured image Ii(x,y) is multiplied by the camera mask Mi(x,y) to obtain the masked image Ĩi(x,y), in which interreflections are eliminated.

In the existing literature, epipolar imaging has been implemented with two types of prototypes. The first consists of a DMD-based projector and a camera with a DMD in front of the sensor [18]. The second consists of a laser scan projector and a rolling shutter camera [19]. Both prototypes are difficult to implement. In the proposed method, epipolar imaging is implemented with the conventional camera and projector of an FPP system; no additional optical or mechanical hardware is required. Although the imaging speed is slower than that of the existing epipolar imaging prototypes, the proposed implementation is flexible because an existing FPP system can be used directly to perform epipolar imaging without modification.

In practice, the distortion of the camera and projector lenses should be considered during epipolar imaging. In addition, fusing the final image is difficult if the slopes of the epipolar lines differ. Consequently, an epipolar imaging procedure based on epipolar rectification and inverse epipolar rectification with distortion correction is introduced, as depicted in Fig. 6.

Fig. 6 Schematic diagram of epipolar rectification and inverse epipolar rectification. I and IV are the image planes after epipolar rectification; II and III are image planes before epipolar rectification of the projector and the camera, respectively. Original projected pattern is multiplied by the horizontal epipolar mask to acquire masked projected pattern (I); Inverse epipolar rectification is performed to obtain the actual projected pattern (I−II); The projected pattern is captured by the camera (II−III); Epipolar rectification makes the projected epipolar line in the captured image become horizontal again (III−IV).

  • (1) Masked projected pattern is obtained by multiplying original projected pattern and the horizontal epipolar mask in the image plane after epipolar rectification with distortion correction [23], where the epipolar line is horizontal and distortion is removed;
  • (2) Inverse epipolar rectification with distortion correction is performed on the masked projected pattern to obtain the actual projected pattern;
  • (3) The actual projected pattern is projected by the projector and captured by the camera;
  • (4) Epipolar rectification with distortion correction is performed on the original captured image to make the projected epipolar line become horizontal again. All rectified captured images are multiplied by the corresponding horizontal epipolar masks to eliminate the interreflections and summed together to form the final fused image.
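After step (4), the fusion itself reduces to a masked multiply-and-sum, which can be sketched as follows (a simplified version that assumes the images are already rectified; mask generation is omitted):

```python
import numpy as np

def fuse_epipolar(rectified_images, masks):
    """Multiply each rectified capture by its horizontal epipolar mask and
    sum the results into the final interreflection-free image."""
    fused = np.zeros_like(rectified_images[0])
    for img, mask in zip(rectified_images, masks):
        fused += img * mask
    return fused
```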

Although traditional fringe patterns, such as multi-wavelength heterodyne phase-shifting sinusoidal fringes, can be utilized in epipolar imaging, a large number of patterns must be projected, especially when HDRFA is applied for shiny surfaces. Consequently, the projection procedure is time-consuming, which makes the method impractical. Alternatively, the fringe patterns can be replaced by the digital speckle patterns displayed in Fig. 7 to reduce the number of projected patterns. For each single epipolar line, only two speckle patterns are projected: one is a black-and-white binary pattern in which the gray level of each pixel is either 0 or 255 [24], and the other is its complement. The two captured patterns are subtracted to eliminate the effects of spatially varying reflectance. The black-and-white pattern and the subtraction operation can partly resist the overexposure of glossy surfaces, so multi-intensity projection and multi-exposure imaging are unnecessary.
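The positive/negative speckle pair and the albedo-cancelling subtraction can be sketched as below; the linear reflectance model is a simplification used only for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def speckle_pair(h, w, density=0.5):
    """A binary speckle pattern (gray levels 0/255) and its complement."""
    pos = (rng.random((h, w)) < density) * 255.0
    return pos, 255.0 - pos

def capture(pattern, albedo, ambient):
    """Toy camera model: per-pixel albedo times pattern plus ambient light."""
    return albedo * pattern + ambient
```

Subtracting the two captures cancels the ambient term, and the sign of the difference recovers the binary code regardless of the (positive) local albedo, which is why multi-exposure HDR imaging can be skipped here.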

Fig. 7 Speckle patterns projected on the concave surfaces. (a) Full-field projected speckle pattern; (b) Epipolar imaging speckle pattern.

Stereo matching is performed between the image captured by the camera and the pattern projected by the projector to obtain the 3D point cloud. Epipolar rectification is applied to simplify the stereo matching procedure. For each pixel in the camera, the normalized cross-correlation (NCC) [22] is computed inside a square window along the corresponding horizontal epipolar line in the projected pattern as follows

$$NCC(x,y,d)=\frac{\sum_{u,v}\left[I_c(u,v)-\bar{I}_c\right]\left[I_p(u-d,v)-\bar{I}_p\right]}{\left\{\sum_{u,v}\left[I_c(u,v)-\bar{I}_c\right]^2\sum_{u,v}\left[I_p(u-d,v)-\bar{I}_p\right]^2\right\}^{1/2}},$$
where Ic(u,v) and Ip(u−d,v) are the intensities of the captured camera image and the projected pattern, respectively; Īc and Īp are the average intensities in the window; and d is the offset between the two images. d becomes the disparity when the NCC value reaches its maximum and exceeds a threshold εNCC. NCC is robust to variations of intensity and contrast, which makes stereo matching between the captured image and the projected pattern possible.
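A windowed NCC search along a rectified epipolar line might look like the sketch below (the names, window size, and search range are illustrative):

```python
import numpy as np

def ncc(a, b):
    """Zero-mean normalized cross-correlation of two equal-size windows."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def match_disparity(cam, proj, x, y, win=5, d_max=20, eps_ncc=0.5):
    """Best disparity for camera pixel (x, y): slide a window along the
    corresponding horizontal epipolar line of the projected pattern."""
    h = win // 2
    ref = cam[y - h:y + h + 1, x - h:x + h + 1]
    best_d, best_s = None, eps_ncc          # reject matches below eps_ncc
    for d in range(d_max + 1):
        cand = proj[y - h:y + h + 1, x - d - h:x - d + h + 1]
        if cand.shape != ref.shape:
            continue
        s = ncc(ref, cand)
        if s > best_s:
            best_d, best_s = d, s
    return best_d
```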

However, the pattern captured by the camera differs from the original projected pattern because of the angle between the camera and projector, perspective projection, and the surface orientation of the object. Consequently, stereo matching fails in several regions, as demonstrated in Fig. 8(c). The speckle pattern is multiplied by the epipolar mask in the space after epipolar rectification, so the pattern in the rectified image does not deform in the vertical direction. In a given pixel and its local area, the deformation can be regarded as a skew, which can be described by only one parameter, the slope angle. Consequently, three projected patterns (0°, −25°, 25°) are utilized to perform stereo matching, as exhibited in Fig. 8, and the candidate with the maximal NCC value among the three patterns is taken as the corresponding point.

Fig. 8 Stereo matching results with different projected speckle patterns. (a) Difference between captured positive and negative speckle patterns; (b) Original projected speckle pattern (0°); (c) Stereo matching result with (b); (d) Right-skewed projected speckle pattern (25°); (e) Left-skewed projected speckle pattern (−25°); (f) Stereo matching result with (b), (d), and (e) simultaneously.

2.3 Regional fringe projection

Although FPP and epipolar imaging with speckle patterns can obtain a complete measurement result in the presence of strong interreflections, the measurement accuracy of the speckle patterns is low. In addition, the measurement result of FPP still retains some errors that cannot be removed by the horizontal phase. Consequently, regional fringe projection [15] is introduced to improve the measurement accuracy. The 3D point cloud retrieved by FPP and epipolar imaging can be split into several regions such that no region reflects light onto itself. If only one of these regions is projected and imaged at a time, interreflections are eliminated. As shown in Fig. 9, regional fringe projection for each region can be represented, similar to epipolar imaging, as

$$\tilde{I}'_\phi(x,y)=I'_\phi(x,y)M'(x,y),$$
$$\tilde{I}_\phi(x,y)=I_\phi(x,y)M(x,y),$$
where I′ϕ(x,y) is the original fringe pattern, which is multiplied by the regional projector mask M′(x,y) to obtain the regional masked fringe pattern Ĩ′ϕ(x,y), and the original captured fringe image Iϕ(x,y) is multiplied by the regional camera mask M(x,y) to obtain the regional masked fringe image Ĩϕ(x,y). The masked fringe images of all regions are summed to obtain the fused fringe image.

Fig. 9 Regional fringe projection and imaging of a single region; the original fringe pattern I′ϕ(x,y) is multiplied by the regional projector mask M′(x,y) to obtain the regional masked fringe pattern Ĩ′ϕ(x,y), and the original captured fringe image Iϕ(x,y) is multiplied by the regional camera mask M(x,y) to obtain the regional masked fringe image Ĩϕ(x,y).

The segmentation of regions can be accomplished by normal clustering based on the observation that the surfaces with similar normals reflect little light to each other. The image points can be split into m regions to acquire regional camera masks M1,,m(x,y). The regional camera mask of the ith region Mi(x,y) can be obtained by

$$M_i(x,y)=\begin{cases}0,&\angle\left(\mathbf{n}_i,\mathbf{n}(x,y)\right)\ge\theta\\1,&\angle\left(\mathbf{n}_i,\mathbf{n}(x,y)\right)<\theta,\end{cases}$$
where θ is the normal threshold, ni is the surface normal of the ith region, and n(x,y) is the normal of the 3D point (X, Y, Z) corresponding to the 2D image point (x, y), which is computed as follows
$$\mathbf{n}(x,y)=\frac{(\mathbf{P}_{left}-\mathbf{P}_{right})\times(\mathbf{P}_{up}-\mathbf{P}_{down})}{\left\|(\mathbf{P}_{left}-\mathbf{P}_{right})\times(\mathbf{P}_{up}-\mathbf{P}_{down})\right\|},$$
where Pleft, Pright, Pup, and Pdown are the adjacent 3D points of the central point in the four directions. The computation is efficient because the adjacent points can be acquired directly in the image plane. Finally, the regional projector mask of the ith region M′i(x,y) is obtained by
$$M'_i[x-d(x,y),y]=M_i(x,y),$$
where d(x,y) is the disparity, which is the result of stereo matching in FPP and epipolar imaging.
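The normal computation of Eq. (18) and the clustering mask of Eq. (17) can be sketched for an organized point cloud as follows (the array shapes and function names are illustrative):

```python
import numpy as np

def normals(P):
    """Per-pixel unit normals of an organized point cloud P of shape (H, W, 3),
    using central differences: (P_left - P_right) x (P_up - P_down)."""
    n = np.cross(P[1:-1, :-2] - P[1:-1, 2:],   # P_left - P_right
                 P[:-2, 1:-1] - P[2:, 1:-1])   # P_up - P_down
    return n / np.linalg.norm(n, axis=-1, keepdims=True)

def region_mask(n, n_i, theta_deg):
    """Camera mask of region i: 1 where the angle between the pixel normal
    and the region normal n_i is below the threshold theta_deg."""
    cos_ang = np.clip(n @ n_i, -1.0, 1.0)
    return (np.degrees(np.arccos(cos_ang)) < theta_deg).astype(np.uint8)
```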

For each region, only phase-shifting fringe patterns are projected; multi-wavelength patterns are not required because the phase ambiguity can be removed using the correspondences established by FPP and the speckle patterns. The unwrapped phase Φregion(x,y) can be obtained by

$$\Phi_{region}(x,y)=\phi_{region}(x,y)+2\pi\,\mathrm{round}\left\{\frac{\Phi_{projector}[x-d(x,y),y]-\phi_{region}(x,y)}{2\pi}\right\},$$
where ϕregion(x,y) is the wrapped phase of regional fringe projection, and Φprojector[xd(x,y),y] is the low-accuracy phase obtained by indexing the phase of the projector with an offset d(x,y), which is the disparity of FPP and epipolar imaging.
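This unwrapping step is a standard reference-guided fringe-order recovery; it tolerates errors in the low-accuracy reference of up to half a fringe period, as the sketch below illustrates (the names are illustrative):

```python
import numpy as np

def unwrap_with_reference(phi_region, Phi_ref):
    """Remove the 2*pi ambiguity of a wrapped phase using a coarse absolute
    reference phase (here, the projector phase indexed by the disparity)."""
    k = np.round((Phi_ref - phi_region) / (2 * np.pi))   # fringe order
    return phi_region + 2 * np.pi * k
```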

2.4 Pipeline of 3D shape measurement by epipolar imaging and regional fringe projection

The pipeline of the proposed 3D shape measurement method by epipolar imaging and regional fringe projection consists of the following steps:

Step 1. FPP with HDRFA is performed to measure the 3D point cloud in the regions without interreflections (Section 2.1);

Step 2. The regions that contain interreflections are identified by Eq. (9) to determine the rows which are required to perform epipolar imaging (Section 2.1);

Step 3. Epipolar imaging with speckle patterns is performed, and stereo matching between the projected speckle pattern and the captured image is utilized to acquire low-accuracy measurement result in the regions which contain interreflections (Section 2.2);

Step 4. Normal clustering based on the measurement results of FPP and epipolar imaging is conducted to generate regional projector masks and regional camera masks (Section 2.3);

Step 5. Regional fringe projection with HDRFA is performed to obtain high-accuracy measurement result (Section 2.3).

3. Experiments

The experimental system depicted in Fig. 10(a) consists of a monochrome CMOS camera (Basler acA1600-60gm) with a resolution of 1600 × 1200 and a DMD-based projector with a resolution of 1280 × 800. The measured objects contain an aluminium alloy workpiece with concave surfaces, as demonstrated in Fig. 10(b), a ceramic bowl, as displayed in Fig. 10(c), and an etalon formed by two steel gauge blocks, as exhibited in Fig. 10(d).

Fig. 10 Measurement system and measured objects. (a) Measurement system formed by a camera and a projector; (b) Aluminium alloy workpiece with concave surfaces; (c) Ceramic bowl; (d) Etalon formed by two steel gauge blocks.

First, FPP with HDRFA is performed to measure the regions that contain only direct light. Three wavelengths (13, 14, and 15) are utilized for multi-wavelength heterodyne phase unwrapping. The fused HDR fringe patterns are illustrated in Figs. 11(a) and 11(d), and the unwrapped phase is displayed in Figs. 11(b) and 11(e). The measurement result presented in Fig. 11(c) contains regions with significant errors caused by interreflections; these regions are excluded using the phase retrieved from the horizontal fringes to obtain a clean measurement result, as depicted in Fig. 11(f).

Fig. 11 3D shape measurement by fringe projection. (a) HDR vertical fringes; (b) Unwrapped phase retrieved by vertical fringes; (c) Measurement result that contains wrong data caused by interreflections; (d) HDR horizontal fringes; (e) Unwrapped phase retrieved by horizontal fringes; (f) Measurement result after excluding the wrong data by horizontal fringes.

Second, the rows required to perform epipolar imaging are determined by Eq. (9), where the wavelength λ1 is 13, ε1 is set to 1.0, ε2 to 0.5, and the intensity modulation threshold ε3 to 20. Epipolar imaging is then performed; the stride of the epipolar lines is chosen to be 3 pixels for efficiency, and the width of each epipolar line is 5 pixels, which is larger than the stride to avoid discontinuities in the fused image. The complete speckle pattern image demonstrated in Fig. 12(b) is composed of several captured epipolar images, such as the one presented in Fig. 12(a). Stereo matching is conducted between the camera and projector by Eq. (14), with the NCC threshold εNCC set to 0.5, to acquire the low-accuracy measurement result in the regions that contain strong glossy reflections, as illustrated in Fig. 12(c).

Fig. 12 Epipolar imaging with speckle patterns. (a) Captured image of a single epipolar line; (b) Fused speckle pattern by epipolar imaging (only the rows containing interreflections are projected); (c) Measurement result of epipolar imaging (blue region).

Third, the projector and camera masks for regional fringe projection are generated by normal clustering with a normal threshold θ of 25° in Eq. (17). The high-accuracy measurement result demonstrated in Fig. 13(f) is obtained by regional fringe projection with HDRFA. The captured multi-intensity and multi-exposure regional fringe images exhibited in Figs. 13(d) and 13(e) are multiplied by the camera masks of each region to obtain the fused HDR fringe image, as depicted in Fig. 13(c). The wrapped phase is computed from the fused fringe images and unwrapped using the phase obtained from FPP and stereo matching of the speckle patterns, as described in Eq. (20).

Fig. 13 Regional fringe projection and imaging. (a) Camera mask of region 1; (b) Camera mask of region 2; (c) Fused HDR fringe image of two regions; (d) Captured fringe image of region 1; (e) Captured fringe image of region 2; (f) Measurement result of regional fringe projection.

In addition, a ceramic bowl is measured by the proposed method, as exhibited in Fig. 14. There are several holes and ripples in the measurement result of FPP; the holes can be filled by epipolar imaging and regional fringe projection, and the ripples caused by interreflections are eliminated in the measurement result of regional fringe projection.

Fig. 14 Measurement results of a ceramic bowl. (a) HDR fringes in FPP; (b) Fused speckle pattern by epipolar imaging (only the rows containing interreflections are projected); (c) Segmented regions by normal clustering (one gray level represents one region); (d) Measurement result of FPP; (e) Measurement result of epipolar imaging (blue region); (f) Measurement result of regional fringe projection.

An etalon formed by two steel gauge blocks is measured to evaluate the measurement accuracy of the proposed method. The error of fitting a plane to each block is used to evaluate the accuracy. As shown in Fig. 15, FPP fails in several regions and contains significant errors due to interreflections; epipolar imaging with speckle patterns obtains a complete measurement result, but its accuracy remains low; only regional fringe projection obtains a measurement that is both complete and highly accurate.

Fig. 15 Error distribution of fitting two planes. (a) FPP; (b) Epipolar imaging with speckle patterns; (c) Regional fringe projection.

Table 1 compares the plane-fitting errors of FPP, epipolar imaging, and regional fringe projection in the same regions. The results show that the measurement accuracy of regional fringe projection is significantly higher than that of FPP and speckle patterns in the regions that contain interreflections.

Table 1. Fitting plane error comparison (unit: mm)

4. Conclusions

In this research, a 3D shape measurement method in the presence of strong interreflections without a known initial 3D shape is proposed. Epipolar imaging with speckle patterns is applied to obtain an initial low-accuracy measurement result, and regional fringe projection based on the initial result is utilized to acquire a high-accuracy measurement. Experimental results indicate that the proposed method can measure regions that contain strong, high-frequency glossy interreflections that traditional FPP cannot measure. The proposed method is more flexible than other methods, such as SLT [18,19], which rely on special hardware and elaborate arrangements, because a traditional FPP system can be directly used to measure 3D shape in the presence of strong interreflections without modification. The most time-consuming procedure, epipolar imaging, is accelerated by projecting speckle patterns and skipping the rows without interreflections. Measurement accuracy is not reduced by the stereo matching of speckle patterns because regional fringe projection guarantees that the accuracy is not lower than that of traditional FPP. Moreover, projecting multi-wavelength heterodyne fringes for phase unwrapping during regional fringe projection is unnecessary because the low-accuracy measurement result of epipolar imaging can be used to remove the phase ambiguity.

In the proposed method, normal clustering is utilized to split the regions used by regional fringe projection, but this segmentation is based on the assumption that only single-bounce indirect light exists; second-bounce or multi-bounce indirect light cannot be entirely eliminated by segmentation based on normal clustering. In future work, the regions could be split more effectively, by analyzing light transport, to avoid multi-bounce indirect light.
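The normal-clustering segmentation mentioned above can be illustrated with a simple k-means on unit surface normals; this is a hypothetical sketch, not the paper's exact clustering procedure. Surfaces whose normals fall in different clusters face different directions, so projecting fringes onto one cluster at a time suppresses single-bounce interreflections between mutually visible surfaces:

```python
import numpy as np

def cluster_normals(normals, k=2, iters=50, seed=0):
    """Cluster unit surface normals into k groups with a minimal k-means
    using cosine similarity (illustrative sketch)."""
    n = np.asarray(normals, dtype=float)
    n = n / np.linalg.norm(n, axis=1, keepdims=True)
    rng = np.random.default_rng(seed)
    centers = n[rng.choice(len(n), k, replace=False)]
    for _ in range(iters):
        # Assign each normal to the closest center (max cosine similarity)
        labels = np.argmax(n @ centers.T, axis=1)
        new_centers = np.array([n[labels == j].mean(axis=0) for j in range(k)])
        new_centers /= np.linalg.norm(new_centers, axis=1, keepdims=True)
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels
```

As the paper notes, such a segmentation only targets single-bounce indirect light; light that bounces more than once can still cross the cluster boundaries.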

Funding

National Natural Science Foundation of China (NSFC) (61735003, 61475013); Aeronautical Science Foundation of China (2015ZE51054); Program for Changjiang Scholars and Innovative Research Team in University (IRT_16R02).

References and links

1. F. Chen, G. M. Brown, and M. Song, “Overview of three-dimensional shape measurement using optical methods,” Opt. Eng. 39(1), 10–22 (2000). [CrossRef]  

2. S. S. Gorthi and P. Rastogi, “Fringe projection techniques: Whither we are?” Opt. Lasers Eng. 48(2), 133–140 (2010). [CrossRef]  

3. C. Munkelt, P. Kühmstedt, M. Heinze, H. Süße, and G. Notni, “How to detect object-caused illumination effects in 3D fringe projection,” Proc. SPIE 5856, 632–639 (2005). [CrossRef]  

4. Y. Wang, K. Liu, Q. Hao, X. Wang, D. L. Lau, and L. G. Hassebrook, “Robust active stereo vision using Kullback-Leibler divergence,” IEEE Trans. Pattern Anal. Mach. Intell. 34(3), 548–563 (2012). [CrossRef]   [PubMed]  

5. S. Herbort, B. Gerken, D. Schugk, and C. Wöhler, “3D range scan enhancement using image-based methods,” ISPRS J. Photogramm. Remote Sens. 84, 69–84 (2013). [CrossRef]  

6. S. K. Nayar, G. Krishnan, M. D. Grossberg, and R. Raskar, “Fast separation of direct and global components of a scene using high frequency illumination,” in Proceedings of ACM SIGGRAPH (ACM, 2006), pp. 935–944. [CrossRef]  

7. T. Chen, H. P. Seidel, and H. P. A. Lensch, “Modulated phase-shifting for 3D scanning,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2008), pp. 1–8.

8. M. Gupta and S. K. Nayar, “Micro phase shifting,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2012), pp. 813–820.

9. Y. Zhang, Z. Xiong, and F. Wu, “Unambiguous 3D measurement from speckle-embedded fringe,” Appl. Opt. 52(32), 7797–7805 (2013). [CrossRef]   [PubMed]  

10. S. Tang, X. Zhang, and D. Tu, “Micro-phase measuring profilometry: Its sensitivity analysis and phase unwrapping,” Opt. Lasers Eng. 72, 47–57 (2015). [CrossRef]  

11. M. Gupta, A. Agrawal, A. Veeraraghavan, and S. G. Narasimhan, “Structured light 3D scanning in the presence of global illumination,” in Proceedings of IEEE International Conference on Computer Vision (IEEE, 2011), pp. 713–720. [CrossRef]  

12. V. Couture, N. Martin, and S. Roy, “Unstructured light scanning to overcome interreflections,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2012), pp. 1895–1902.

13. N. Martin, V. Couture, and S. Roy, “Subpixel scanning invariant to indirect lighting using quadratic code length,” in Proceedings of IEEE International Conference on Computer Vision (IEEE, 2013), pp. 1441–1448. [CrossRef]  

14. Q. Hu, K. G. Harding, X. Du, and D. Hamilton, “Shiny parts measurement using color separation,” Proc. SPIE 6000, 60000D (2005). [CrossRef]  

15. H. Jiang, Y. Zhou, and H. Zhao, “Using adaptive regional projection to measure parts with strong reflection,” Proc. SPIE 10458, 104581A (2017).

16. Y. Xu and D. G. Aliaga, “An adaptive correspondence algorithm for modeling scenes with strong interreflections,” IEEE Trans. Vis. Comput. Graph. 15(3), 465–480 (2009). [CrossRef]   [PubMed]  

17. X. Chen and Y. H. Yang, “Scene adaptive structured light using error detection and correction,” Pattern Recognit. 48(1), 220–230 (2015). [CrossRef]  

18. M. O’Toole, J. Mather, and K. N. Kutulakos, “3D shape and indirect appearance by structured light transport,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2014), pp. 3246–3253. [CrossRef]  

19. M. O’Toole, S. Achar, S. G. Narasimhan, and K. N. Kutulakos, “Homogeneous codes for energy-efficient illumination and imaging,” in Proceedings of ACM SIGGRAPH (ACM, 2015), p. 35. [CrossRef]  

20. C. Zuo, L. Huang, M. Zhang, Q. Chen, and A. Asundi, “Temporal phase unwrapping algorithms for fringe projection profilometry: A comparative review,” Opt. Lasers Eng. 85, 84–103 (2016). [CrossRef]  

21. H. Jiang, H. Zhao, and X. Li, “High dynamic range fringe acquisition: A novel 3-D scanning technique for high reflective surfaces,” Opt. Lasers Eng. 50(10), 1484–1493 (2012). [CrossRef]  

22. N. Pears, Y. Liu, and P. Bunting, 3D Imaging, Analysis and Applications (Springer-Verlag, 2012), Chap. 2.

23. D. Li, H. Zhao, and H. Jiang, “Fast phase-based stereo matching method for 3D shape measurement,” in Proceedings of International Symposium on Optomechatronic Technologies (IEEE, 2011), pp. 1–5.

24. S. Gai, F. Da, and X. Dai, “Novel 3D measurement system based on speckle and fringe pattern projection,” Opt. Express 24(16), 17686–17697 (2016). [CrossRef]   [PubMed]  




Figures (15)

Fig. 1 FPP in the presence of interreflections. (a) Typical setup of an FPP system; (b) Fringe patterns with strong glossy interreflections between concave surfaces (blue solid line: direct light path; red dashed line: indirect light path).
Fig. 2 Framework of the proposed method.
Fig. 3 Determining the rows required to perform epipolar imaging. (a) Pixels that satisfy p ∩ r; (b) Pixels that satisfy q ∩ r; (c) Pixels that satisfy (p ∪ q) ∩ r; (d) Mask after morphological operations. The gray band labels the rows required to perform epipolar imaging.
Fig. 4 Epipolar imaging in the FPP system. The projected epipolar line becomes two lines in the captured image due to interreflections, and the line of direct light can be identified by the epipolar constraint.
Fig. 5 The epipolar imaging procedure for a single epipolar line. The original projected pattern I(x, y) is multiplied by the projector mask M_i(x, y) to obtain the masked projected pattern Ĩ_i(x, y), and the original captured image I′_i(x, y) is multiplied by the camera mask M′_i(x, y) to obtain the masked image Ĩ′_i(x, y), in which interreflections are eliminated.
Fig. 6 Schematic diagram of epipolar rectification and inverse epipolar rectification. I and IV are the image planes after epipolar rectification; II and III are the image planes before epipolar rectification of the projector and the camera, respectively. The original projected pattern is multiplied by the horizontal epipolar mask to acquire the masked projected pattern (I); inverse epipolar rectification is performed to obtain the actual projected pattern (I→II); the projected pattern is captured by the camera (II→III); epipolar rectification makes the projected epipolar line in the captured image horizontal again (III→IV).
Fig. 7 Speckle patterns projected on the concave surfaces. (a) Full-field projected speckle pattern; (b) Epipolar imaging speckle pattern.
Fig. 8 Stereo matching results with different projected speckle patterns. (a) Difference between captured positive and negative speckle patterns; (b) Original projected speckle pattern (0°); (c) Stereo matching result with (b); (d) Right-skewed projected speckle pattern (25°); (e) Left-skewed projected speckle pattern (−25°); (f) Stereo matching result with (b), (d), and (e) simultaneously.
Fig. 9 Regional fringe projection and imaging of a single region. The original fringe pattern I_φ(x, y) is multiplied by the regional projector mask M(x, y) to obtain the regional masked fringe pattern Ĩ_φ(x, y), and the original captured fringe image I′_φ(x, y) is multiplied by the regional camera mask M′(x, y) to obtain the regional masked fringe image Ĩ′_φ(x, y).
Fig. 10 Measurement system and measured objects. (a) Measurement system formed by a camera and a projector; (b) Aluminium alloy workpiece with concave surfaces; (c) Ceramic bowl; (d) Etalon formed by two steel gauge blocks.
Fig. 11 3D shape measurement by fringe projection. (a) HDR vertical fringes; (b) Unwrapped phase retrieved by vertical fringes; (c) Measurement result that contains wrong data caused by interreflections; (d) HDR horizontal fringes; (e) Unwrapped phase retrieved by horizontal fringes; (f) Measurement result after excluding the wrong data using horizontal fringes.
Fig. 12 Epipolar imaging with speckle patterns. (a) Captured image of a single epipolar line; (b) Fused speckle pattern by epipolar imaging (only the rows that contain interreflections are projected); (c) Measurement result of epipolar imaging (blue region).
Fig. 13 Regional fringe projection and imaging. (a) Camera mask of region 1; (b) Camera mask of region 2; (c) Fused HDR fringe image of the two regions; (d) Captured fringe image of region 1; (e) Captured fringe image of region 2; (f) Measurement result of regional fringe projection.
Fig. 14 Measurement results of a ceramic bowl. (a) HDR fringes in FPP; (b) Fused speckle pattern by epipolar imaging (only the rows that contain interreflections are projected); (c) Segmented regions by normal clustering (one gray level represents one region); (d) Measurement result of FPP; (e) Measurement result of epipolar imaging (blue region); (f) Measurement result of regional fringe projection.
Fig. 15 Error distribution of fitting two planes. (a) FPP; (b) Epipolar imaging with speckle patterns; (c) Regional fringe projection.

Tables (1)

Table 1 Fitting plane error comparison (unit: mm)

Equations (20)


$$I_i(x,y) = I_A(x,y) + I_B(x,y)\cos\left[\phi(x,y) + \frac{\pi}{2}i\right], \tag{1}$$
$$\phi(x,y) = \arctan\frac{I_3(x,y) - I_1(x,y)}{I_0(x,y) - I_2(x,y)}, \tag{2}$$
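The two phase-shifting relations above can be exercised numerically. The following NumPy sketch (function and variable names are ours, not the paper's) recovers a known wrapped phase from four synthetic fringe images with π/2 shifts:

```python
import numpy as np

def wrapped_phase(I0, I1, I2, I3):
    """Four-step phase shifting: recover the wrapped phase from four
    fringe images whose phases are shifted by pi/2 (the arctan relation)."""
    return np.arctan2(I3 - I1, I0 - I2)

# Synthetic check: build four shifted fringe images for a known phase.
x = np.linspace(-np.pi, np.pi, 256)
phi = 0.8 * x                                  # ground truth, inside (-pi, pi)
A, B = 0.5, 0.4                                # background and modulation
frames = [A + B * np.cos(phi + np.pi / 2 * i) for i in range(4)]
phi_est = wrapped_phase(*frames)               # matches phi up to float error
```

Using `arctan2` rather than `arctan` keeps the correct quadrant over the full (−π, π] range.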
$$\Phi_1(x,y) = \phi_1(x,y) + 2\pi\,\mathrm{round}\left\{\frac{1}{2\pi}\left[\frac{\lambda_2}{\lambda_2-\lambda_1}\big(\phi_1(x,y)-\phi_2(x,y)\big) - \phi_1(x,y)\right]\right\}, \tag{3}$$
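The two-wavelength unwrapping rule above can be sketched as follows; this is an illustrative implementation under the usual assumption that the beat wavelength λ₁λ₂/(λ₂ − λ₁) covers the full measurement range:

```python
import numpy as np

def unwrap_two_wavelength(phi1, phi2, lam1, lam2):
    """Two-wavelength temporal unwrapping: the wrapped beat phase of
    pitches lam1 < lam2 selects the fringe order of phi1."""
    beat = np.mod(phi1 - phi2, 2 * np.pi)        # wrapped beat phase
    coarse = lam2 / (lam2 - lam1) * beat         # coarse absolute phase
    k = np.round((coarse - phi1) / (2 * np.pi))  # fringe order of phi1
    return phi1 + 2 * np.pi * k

# Synthetic check over several fringe orders of lam1.
x = np.linspace(0.0, 60.0, 500)                  # pixel coordinate
lam1, lam2 = 16.0, 18.0                          # beat wavelength = 144 > 60
Phi = 2 * np.pi * x / lam1                       # true absolute phase
phi1 = np.angle(np.exp(1j * Phi))                # wrapped phase, pitch lam1
phi2 = np.angle(np.exp(1j * 2 * np.pi * x / lam2))
Phi_rec = unwrap_two_wavelength(phi1, phi2, lam1, lam2)
```

The coarse phase need only be accurate to within π for the rounding step to pick the correct fringe order.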
$$I^1(x,y) = I_A^1(x,y) + I_B^1(x,y)\cos\left[\phi_1(x,y)\right], \tag{4}$$
$$I^2(x,y) = I_A^2(x,y) + I_B^2(x,y)\cos\left[\phi_2(x,y)\right] = I_A^2 + I_B^2\cos\left[\phi_1(x,y) + \Delta\phi(x,y)\right], \tag{5}$$
$$I^s(x,y) = I_A^s(x,y) + I_B^s(x,y)\cos\left[\phi^s(x,y)\right], \tag{6}$$
$$\phi^s(x,y) = \phi_1(x,y) + \Delta\phi^s(x,y), \tag{7}$$
$$\tan\Delta\phi^s(x,y) = \frac{I_B^2(x,y)\sin\left[\Delta\phi(x,y)\right]}{I_B^1(x,y) + I_B^2(x,y)\cos\left[\Delta\phi(x,y)\right]}, \tag{8}$$
$$\begin{cases} p: \dfrac{\left|\Phi_c(x,y) - \Phi_p(x,y)\right|}{2\pi}\,\lambda_1 > \varepsilon_1 \\[4pt] q: \displaystyle\sum_{i=1}^{n}\left|d_i(x,y) - \frac{1}{n}\sum_{i=1}^{n} d_i(x,y)\right| > \varepsilon_2 \\[4pt] r: I_B(x,y) > \varepsilon_3 \end{cases} \tag{9}$$
$$I(x,y) = \sum_{i=1}^{n} \tilde{I}_i(x,y) = \sum_{i=1}^{n} I(x,y)\,M_i(x,y), \tag{10}$$
$$\tilde{I}'(x,y) = \sum_{i=1}^{n} \tilde{I}'_i(x,y) = \sum_{i=1}^{n} I'_i(x,y)\,M'_i(x,y), \tag{11}$$
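The mask-based decomposition and fusion above amount to summing per-line masked captures. A minimal sketch, where the row partition and the synthetic stray-light term are our own illustrative assumptions:

```python
import numpy as np

def fuse_masked_captures(captures, masks):
    """Fuse per-epipolar-line captures: each capture is multiplied by its
    camera mask and the masked images are summed into one image."""
    return sum(I * M for I, M in zip(captures, masks))

# Synthetic check: three masks that partition six image rows; each capture
# holds the direct-light image on its own rows and stray light elsewhere.
rng = np.random.default_rng(1)
direct = rng.random((6, 8))                     # interreflection-free image
masks = [np.zeros((6, 8)) for _ in range(3)]
for i, M in enumerate(masks):
    M[2 * i : 2 * i + 2, :] = 1.0               # each mask keeps two rows
captures = [direct + (1 - M) * rng.random((6, 8)) for M in masks]
fused = fuse_masked_captures(captures, masks)   # stray light is masked out
```

Because the masks are disjoint and cover every row, the fused image equals the direct-light image exactly in this toy setting.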
$$l' = F\,p, \tag{12}$$
$$l = F^{\mathrm{T}}\,p', \tag{13}$$
$$\mathrm{NCC}(x,y,d) = \frac{\sum_{u,v}\left[I_c(u,v) - \bar{I}_c\right]\left[I_p(u-d,v) - \bar{I}_p\right]}{\left\{\sum_{u,v}\left[I_c(u,v) - \bar{I}_c\right]^2 \sum_{u,v}\left[I_p(u-d,v) - \bar{I}_p\right]^2\right\}^{1/2}}, \tag{14}$$
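A small sketch of the NCC-based disparity search along a rectified row; the window size, search range, and 1D simplification are illustrative choices, not the paper's parameters:

```python
import numpy as np

def ncc(a, b):
    """Zero-mean normalized cross-correlation of two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    return np.sum(a * b) / np.sqrt(np.sum(a**2) * np.sum(b**2))

def best_disparity(row_c, row_p, x, win, d_range):
    """Pick the disparity d maximizing NCC between the camera window
    around x and the projector window shifted by d (rectified rows)."""
    patch_c = row_c[x - win : x + win + 1]
    scores = [ncc(patch_c, row_p[x - d - win : x - d + win + 1])
              for d in d_range]
    return d_range[int(np.argmax(scores))]

# Synthetic check: a random speckle row shifted by a known disparity.
rng = np.random.default_rng(0)
row_p = rng.random(200)                         # projected speckle row
d_true = 7
row_c = np.roll(row_p, d_true)                  # camera row = shifted copy
d_est = best_disparity(row_c, row_p, x=100, win=10, d_range=range(15))
```

The random speckle texture is what makes the correlation peak unique; periodic fringes alone would produce ambiguous matches.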
$$\tilde{I}_\phi(x,y) = I_\phi(x,y)\,M(x,y), \tag{15}$$
$$\tilde{I}'_\phi(x,y) = I'_\phi(x,y)\,M'(x,y), \tag{16}$$
$$M'_i(x,y) = \begin{cases} 0, & \left\langle \mathbf{n}_i, \mathbf{n}(x,y) \right\rangle \ge \theta \\ 1, & \left\langle \mathbf{n}_i, \mathbf{n}(x,y) \right\rangle < \theta \end{cases} \tag{17}$$
$$\mathbf{n}(x,y) = \frac{\left(P_{\mathrm{left}} - P_{\mathrm{right}}\right) \times \left(P_{\mathrm{up}} - P_{\mathrm{down}}\right)}{\left\|\left(P_{\mathrm{left}} - P_{\mathrm{right}}\right) \times \left(P_{\mathrm{up}} - P_{\mathrm{down}}\right)\right\|}, \tag{18}$$
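The cross-product normal estimation above can be sketched as follows; the planar test data and the (H, W, 3) point-map layout are assumptions for illustration:

```python
import numpy as np

def pixel_normal(P, x, y):
    """Unit surface normal at pixel (x, y) from the cross product of the
    left-right and up-down neighbour differences of the 3D point map P
    (an array of shape (H, W, 3))."""
    v1 = P[y, x - 1] - P[y, x + 1]              # P_left - P_right
    v2 = P[y - 1, x] - P[y + 1, x]              # P_up - P_down
    n = np.cross(v1, v2)
    return n / np.linalg.norm(n)

# Synthetic check on the plane z = 2x + 3y, whose unit normal is
# proportional to (-2, -3, 1).
H, W = 8, 8
ys, xs = np.mgrid[0:H, 0:W].astype(float)
P = np.dstack([xs, ys, 2 * xs + 3 * ys])        # (H, W, 3) point map
n = pixel_normal(P, 4, 4)
expected = np.array([-2.0, -3.0, 1.0]) / np.sqrt(14.0)
```

In practice the point map comes from the initial epipolar-imaging measurement, and the per-pixel normals are then clustered to segment the surface into regions.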
$$M_i\left[x - d(x,y),\, y\right] = M'_i(x,y), \tag{19}$$
$$\Phi_{\mathrm{region}}(x,y) = \phi_{\mathrm{region}}(x,y) + 2\pi\,\mathrm{round}\left\{\frac{\Phi_{\mathrm{projector}}\left[x - d(x,y),\, y\right] - \phi_{\mathrm{region}}(x,y)}{2\pi}\right\}, \tag{20}$$
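The reference-guided unwrapping above follows the same round-to-fringe-order pattern as the two-wavelength rule earlier: a coarse absolute phase (here the projector phase fetched via the disparity map) selects the 2π fringe order of the wrapped regional phase. A hedged sketch with invented test values:

```python
import numpy as np

def unwrap_with_reference(phi_wrapped, Phi_ref):
    """Unwrap a wrapped phase using a coarse absolute reference phase:
    the reference selects the 2*pi fringe order, the wrapped phase
    supplies the fine value."""
    k = np.round((Phi_ref - phi_wrapped) / (2 * np.pi))
    return phi_wrapped + 2 * np.pi * k

# Synthetic check: the reference may be off by up to (just under) pi and
# the correct absolute phase is still recovered.
Phi_true = np.array([0.3, 7.0, 13.1, 25.4])     # true absolute phase
phi = np.angle(np.exp(1j * Phi_true))           # wrapped regional phase
Phi_ref = Phi_true + 0.4                        # imperfect reference
Phi_rec = unwrap_with_reference(phi, Phi_ref)
```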
