
Absolute phase measurement in fringe projection using multiple perspectives


Abstract

A technique for absolute phase measurement in fringe projection for shape measurement is presented. A standard fringe projection system is used, comprising a camera and a projector fixed relative to each other. The test object is moved to different orientations relative to the fringe projection system. Using the system calibration parameters, the technique identifies homologous surface areas imaged from different perspectives and resolves the 2π phase ambiguity between them simultaneously. The technique is also used to identify regions of the phase maps corresponding to discrete surfaces on the object. The methods described are suitable for automatic shape measurement with a lightweight fringe projection probe mounted to a coordinate measuring machine.

© 2013 Optical Society of America

Introduction

The 2π phase ambiguity that arises when measuring the shape of surfaces with height discontinuities greater than the period of a projected fringe pattern is well known. The fringe pattern, usually comprising parallel fringes with a sinusoidal profile, is projected onto an object and imaged by a camera. The phase of the fringe pattern recorded at each camera pixel encodes the height of the object, and can be measured accurately to approximately 1% of the imaged fringe period with phase-stepping techniques [1, 2]. Spatial unwrapping of the phase map can fail due to the periodicity of the projected pattern if the height difference between adjacent pixels requires a phase increment greater than 2π to be added to the measured phase. If the height discontinuities produce isolated areas in the phase map, then the absolute phase must be known for at least one point in each area to recover the surface height.
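For concreteness, the sketch below shows the standard N-step phase-stepping estimator in Python/NumPy. It is our own illustration of the general method rather than the implementation used in this work; the function and argument names are assumptions.

```python
import numpy as np

def wrapped_phase(images):
    """Wrapped phase from N equally spaced phase-stepped fringe images.

    images[k] is the frame recorded with an added phase shift of 2*pi*k/N.
    For I_k = A + B*cos(phi + 2*pi*k/N), the standard N-step estimator is
    phi = atan2(-sum_k I_k*sin(2*pi*k/N), sum_k I_k*cos(2*pi*k/N)),
    returned wrapped to (-pi, pi].
    """
    I = np.asarray(images, dtype=float)          # shape (N, rows, cols)
    k = np.arange(I.shape[0]).reshape(-1, 1, 1)
    angles = 2 * np.pi * k / I.shape[0]
    return np.arctan2(-(I * np.sin(angles)).sum(axis=0),
                      (I * np.cos(angles)).sum(axis=0))
```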

The 2π ambiguity can be resolved for a single phase measurement by marking one or more parts of the projected pattern, for example the ‘zero order’ fringe [3]. This approach fails for isolated surfaces and steps which do not coincide with the zero order fringe. Fitts [4] embedded pseudo-random intensity perturbations, on the scale of the noise in the system, in the projected fringe pattern. The distribution of the perturbations varied across the projected pattern, enabling a statistical analysis of the noise in the recorded images to identify individual fringes. However, the noise in the recorded images depends on the surface being measured, which requires the embedded perturbations to be matched to the surface type. The 2π ambiguity can also be resolved by recording multiple images from a single perspective, projecting a sequence of Gray-code patterns [5] or fringe patterns with different pitches [6] to generate a unique code at each pixel. These approaches require a projector capable of producing different patterns, such as interchangeable slides or a programmable digital projector.

A number of authors have reported techniques for absolute shape measurement by projecting a pattern onto an object viewed from a number of different perspectives and using photogrammetric techniques to measure the surface [7–10]. Reich et al. [7] used a beat-frequency technique to resolve the 2π ambiguity in the phase of a fringe pattern viewed from two perspectives. The unambiguous phase provided corresponding points on a continuous, smooth surface for measurement using photogrammetry. Scharstein and Szeliski [8] used a similar approach, projecting either Gray-code or sinusoidal fringes to uniquely encode pixels viewing the same object points from different perspectives. Bräuer-Burchardt et al. [9] and Ishiyama et al. [10] extended this idea by using unwrapped phase and the epipolar geometry of the system to identify corresponding points in multiple camera images, which were again measured using photogrammetry. Remaining points on the surfaces were measured from the phase maps, using the photogrammetrically measured points to resolve the 2π ambiguity. Both techniques required either multiple cameras, or the ability to move the camera independently of the projector, so that the same fringe pattern on the object’s surface could be imaged from multiple perspectives.

The technique described in this paper resolves 2π ambiguities using multiple perspectives. It does not require a series of fringes at different periods to be projected. Only a single camera and projector are required, fixed relative to each other. In our technique, the test object is moved to different orientations relative to the fringe projection system. The phase measured in an area from one view is then re-projected into the phase maps of the other views for a range of integer 2π phase offsets. The correct 2π offset can be determined by analyzing the phase in the re-projected areas in the other views. We show that by repeating this process in a systematic way for all areas in all views, homologous surface areas imaged from different perspectives can be identified and the 2π phase ambiguity between them resolved, simultaneously.

The technique is best described with reference to experimental results. Therefore, the next section describes the system used to collect data using an object that is moved relative to the camera and fringe projector. Subsequent sections describe the technique to resolve 2π ambiguities and how it can then be used to determine areas of the phase map corresponding to isolated surfaces on the object surface. Finally, we discuss the alternative approach of moving the camera and fringe projector as a unit around the fixed object.

Experimental system

The shape measurement system has been described previously [11]. The fringe projection system comprised a camera (PointGrey FLEA-HIBW, 1024 × 768 pixels, 8 mm fixed focal length megapixel lens) and a fringe projector (Hewlett-Packard VP6311 digital video projector) fixed relative to each other in a standard off-axis arrangement. Calibration targets and test objects were moved relative to the fixed projector and camera in order to demonstrate the principles of the technique. The calibration targets and test objects were mounted on a two-axis Renishaw REVO® articulating head that in turn was mounted on a three-axis Mitutoyo Crysta Apex 9106 CMM. The CMM was driven using a Renishaw UCC2™ controller.

System calibration has also been described previously [11] and comprises three steps. Firstly, the CMM was calibrated following the standard ISO procedure, which established the position of the tip of a calibrated touch probe, attached to the two-axis head on the three-axis CMM, accurately and traceably to the order of 1 µm throughout the measurement volume. Secondly, the camera was calibrated: its intrinsic parameters (principal distance and terms describing lens distortions) were calculated by taking multiple images of a calibration target; its extrinsic parameters (position and orientation of the camera) were found in CMM coordinates by recording images of a custom-made calibrated touch probe, with a white spherical stylus ball and black shaft, placed at different positions throughout the camera’s field of view. The root mean square (rms) error between the measured position of the probe tip in the camera image and the position calculated from its CMM coordinates and the camera calibration parameters was 0.4 pixels over the measurement volume. Finally, the phase measured by the fringe projection system was related to all points in its measurement volume by measuring the phase from a plane calibration surface [3, 12]. Following these three calibration steps, the rms uncertainty of the phase-to-position conversion within the measurement volume was equivalent to 1.5% of the projected fringe period, corresponding to 60 μm rms in position [11].
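As an illustration of the extrinsic calibration check, the following sketch projects a stylus-tip position known in CMM coordinates through an idealized pinhole model and compares it with the imaged position. Lens distortion is omitted for brevity, and all names (project_to_pixel, R, t, f, c) are our own assumptions, not the authors' code.

```python
import numpy as np

def project_to_pixel(X_cmm, R, t, f, c):
    """Idealized pinhole projection of a CMM-coordinate point into pixels.

    R, t : extrinsic rotation/translation (CMM frame -> camera frame)
    f    : principal distance in pixels; c = (cx, cy) principal point.
    Lens distortion terms are omitted for brevity.
    """
    Xc = R @ X_cmm + t                            # point in the camera frame
    return f * Xc[:2] / Xc[2] + np.asarray(c)     # perspective divide

# rms re-projection residual over all probe-tip positions (~0.4 px in [11]):
# res = measured_px - np.array([project_to_pixel(X, R, t, f, c) for X in tips])
# rms = np.sqrt((res**2).sum(axis=1).mean())
```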

Measurements were made for the test object shown in Fig. 1(a), consisting of two cuboids mounted on a curved surface. The object was attached to the CMM. Phase-stepped fringe patterns were recorded in order to calculate wrapped phase maps of the type shown in Fig. 1(b). The phase-stepped fringe patterns could either be generated directly by the data projector, or by projecting a single fringe pattern and recording images as the object was rotated on the CMM in an arc about the camera’s perspective center [11]. The phase-stepping method is not important for the technique described in this paper to resolve 2π ambiguities. Figure 1(b) shows the phase recovered by rotating the object about the camera’s perspective center. This approach was chosen because it is consistent with an alternative implementation using a CMM-mounted optical probe moving around a stationary object that will be discussed later.

Fig. 1 (a) Test object. (b) Wrapped phase map from perspective 1 with initially identified object edges marked blue. (c) and (d) Phase maps calculated from phase-stepped images in different cyclic orders, showing object edges (fixed position) and phase wrap discontinuities (moved position) between images.

Figure 1(b) shows that it is possible to identify some object edges in the wrapped phase map, and hence segment the wrapped phase map into discrete areas. Object edges, and the 2π discontinuities from the arctangent operation in the phase calculation from phase-stepped intensity images, may be identified by the local gradient of the wrapped phase, i.e. a step change in phase between adjacent pixels above a certain threshold. The expected change of phase between adjacent pixels can be determined from the projected fringe pitch at the camera image plane. The same threshold can be applied to the entire phase map because phase is generally independent of the background lighting and the reflectance properties of the object. A threshold of π/10 radians was used for the results shown. Any pixel whose neighbor’s phase differed by more than this threshold was marked as an edge, Fig. 1(c). In order to distinguish between object discontinuities and phase wrap discontinuities, each phase map was re-calculated using the phase-stepped images in a different cyclic order. In each re-calculation, phase wrap discontinuities occurred in a different place in the phase map whilst real object discontinuities remained in the same position. Figure 1(d) shows one recalculation of the phase map of Fig. 1(c), in which phase discontinuities due to object surface discontinuities remain stationary and those due to the phase wrap are in different positions. Only those surface discontinuities that remained marked in all of the cyclic combinations are shown in Fig. 1(b), i.e. those due to object discontinuities where the phase difference was above the chosen threshold.
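A minimal sketch of this segmentation step is given below, reusing wrapped_phase from the earlier example; the helper names and threshold default are our own. Cyclically re-ordering the phase-stepped images (here by rolling the image stack) offsets the recovered phase by a constant, which moves the wrap discontinuities without moving object edges.

```python
import numpy as np

def phase_edges(phi, threshold=np.pi / 10):
    """Mark pixels whose wrapped phase differs from a horizontal or
    vertical neighbour by more than `threshold` radians."""
    edges = np.zeros(phi.shape, dtype=bool)
    edges[:, 1:] |= np.abs(np.diff(phi, axis=1)) > threshold
    edges[1:, :] |= np.abs(np.diff(phi, axis=0)) > threshold
    return edges

def object_edges(image_stack, threshold=np.pi / 10):
    """Keep only edges present in every cyclic re-ordering of the
    phase-stepped images: object edges are stationary, whereas 2*pi wrap
    discontinuities move when the cyclic order changes."""
    persistent = None
    for shift in range(len(image_stack)):
        e = phase_edges(wrapped_phase(np.roll(image_stack, shift, axis=0)),
                        threshold)
        persistent = e if persistent is None else (persistent & e)
    return persistent
```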

Although some object edges can be identified by the above approach, it is clearly not feasible to identify all discrete object surfaces due to the 2π phase ambiguity problem. Hence we can consider that the phase map has been segmented into distinct areas that may comprise one or more discrete surfaces of the object. To emphasize this point, Fig. 2(a) shows the result of unwrapping the phase distribution of Fig. 1(b) using a standard flood-fill algorithm. Each separate area within the phase map was unwrapped from an arbitrary start point. For this particular view, some areas correspond to discrete surfaces of the object that were identified by phase discontinuities in the phase map, e.g. area 1 in Fig. 2(a) corresponding to the top surface of the smaller cuboid. Other discrete object surfaces have not been identified correctly as separate areas of the unwrapped phase map because the phase difference between adjacent pixels was within the threshold of an integer multiple of 2π, e.g. the top surface of the larger cuboid and the curved surface have merged into one area in the unwrapped phase map.
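The flood-fill unwrapping of each segmented area can be sketched as follows. This is a standard quality-blind flood fill, assumed rather than taken from the paper; each visited neighbour receives the 2π multiple that brings it closest to the already-unwrapped pixel it was reached from.

```python
from collections import deque
import numpy as np

def floodfill_unwrap(phi_w, mask, seed):
    """Spatially unwrap wrapped phase `phi_w` within the boolean `mask`,
    starting at pixel `seed` (row, col) with an arbitrary offset of zero."""
    phi_u = np.full(phi_w.shape, np.nan)
    phi_u[seed] = phi_w[seed]
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < phi_w.shape[0] and 0 <= nc < phi_w.shape[1]
                    and mask[nr, nc] and np.isnan(phi_u[nr, nc])):
                # add the 2*pi multiple closest to the pixel we came from
                jump = phi_w[nr, nc] - phi_u[r, c]
                phi_u[nr, nc] = (phi_w[nr, nc]
                                 - 2 * np.pi * np.round(jump / (2 * np.pi)))
                queue.append((nr, nc))
    return phi_u
```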

Fig. 2 (a) Unwrapped phase map for perspective 1. Area 1 and regions 2, 3 and 4 show different features of the unwrapped phase (described in the text) before it is re-projected into the unwrapped phase maps from other perspectives. (b) to (d) Unwrapped phase maps from perspectives 2, 3 and 4, respectively.

Figure 2(b) shows an unwrapped phase map recorded with the object presented to the fringe projection system with a different orientation. For this view of the object, identifying object edges once again enabled the top surface of the smaller cuboid to be isolated to a separate area of the unwrapped phase map. Other discrete object surfaces again merged in areas of the unwrapped phase, but these areas do not necessarily correspond to those in Fig. 2(a).

Figures 2(a) and 2(b) both contain path-dependent unwrapping errors. These errors propagate from phase singularities that arise on the boundaries between discrete surfaces, even for low-noise measurements. Therefore the areas into which the unwrapped phase is segmented may correspond to a single surface of the object but, more generally, correspond to two or more discrete surfaces and may well contain path-dependent unwrapping errors that cannot be avoided with a spatial unwrapping algorithm.

In the next section we show how to resolve 2π phase ambiguities for areas of the unwrapped phase maps that correspond to a discrete surface of the object. This case is the simplest with which to demonstrate the principle of the new technique, by which the absolute phase can be assigned based on multiple views of the object. However, as seen in Fig. 2, areas of the unwrapped phase maps do not generally correspond to a single discrete surface of the object: usually areas contain parts of two or more object surfaces. In the subsequent section, we show how the new technique can be extended to subdivide these areas of the unwrapped phase maps into smaller regions of absolute phase that correspond to single discrete surfaces, or parts of a surface, on the object.

Absolute phase measurement

In this section we describe how to resolve 2π phase ambiguities for areas of unwrapped phase maps that correspond to a discrete surface of the object. Consider area 1 in Fig. 2(a) corresponding to the upper surface of the smaller cuboid. The wrapped phase in this area was unwrapped from an arbitrary start point using a standard spatial unwrapping (flood-fill) algorithm. Each pixel in the set has an unwrapped phase ϕu, which is related to the absolute (or correct) phase ϕa by

ϕa(x, y) = ϕu(x, y) + 2mπ  (1)
for some unknown integer m. If the set of pixels corresponds to a continuous surface, as assumed in this section, then m is constant for the set. There are a finite number of possible values of m, determined by the depth of the measurement volume and the period of the projected fringes. The absolute phase at a given pixel is a function of the distance from the camera to the object point being imaged. Each value of m in Eq. (1) gives a different candidate 3D point for each pixel in the area. The problem of 2π ambiguity reduces to ascertaining which of the finite number of 3D points is correct.
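In code, the candidate construction of Eq. (1) might look like the sketch below. Here point_from_phase is a hypothetical stand-in for the phase-to-position calibration of [3, 12]: it returns the 3D point on a pixel's camera ray at a given absolute phase. The admissible m_range follows from the depth of the measurement volume.

```python
import numpy as np

def candidate_clouds(pixels, phi_u, m_range, point_from_phase):
    """One candidate 3D point cloud per admissible integer m (Eq. (1)).

    pixels : list of (row, col) pixels in the area
    phi_u  : their unwrapped phase values
    point_from_phase(pixel, phi_abs) -> 3D point (hypothetical calibration
    helper; e.g. m_range = range(-15, 1) for the example in Fig. 3).
    """
    return {m: np.array([point_from_phase(p, phi + 2 * np.pi * m)
                         for p, phi in zip(pixels, phi_u)])
            for m in m_range}
```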

Figure 2(b) shows an unwrapped phase map recorded from a second perspective. In this straightforward case, the top surface of the smaller cuboid has also been identified as an area of the unwrapped phase map. The candidate 3D points constructed from the area 1 in Fig. 2(a) are re-projected into the phase map for Fig. 2(b), using the known relative positions and orientations of the object in both cases. Figure 3 shows the 3D points reconstructed for four possible values of m for area 1 in Fig. 2(a) re-projected into Fig. 2(b). In this straightforward case where both areas correspond to the same object surface, the re-projected points lie within a single area in the second perspective for m = −10, Fig. 3(b). Hence the absolute phase for both areas has been determined. For other values of m, the re-projected points occur in more than one area of the phase map, indicating that they do not correspond to the real surface of the object. In practice, the ‘degree of overlap’ is determined by analyzing the phase values in the re-projected areas, as described in the next section.
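The re-projection itself can be sketched as a rigid transform by the known relative object pose, followed by an idealized pinhole projection rounded to the nearest pixel (as discussed below). R_rel, t_rel and the intrinsics are assumed inputs; distortion is again omitted.

```python
import numpy as np

def reproject(points, R_rel, t_rel, f, c, image_shape):
    """Map candidate 3D points into nearest-pixel coordinates of another
    perspective, given the relative pose (R_rel, t_rel) known from the CMM."""
    Xc = points @ R_rel.T + t_rel                  # points in the other view
    uv = f * Xc[:, :2] / Xc[:, 2:3] + np.asarray(c)
    px = np.rint(uv).astype(int)                   # nearest-pixel rounding
    inside = ((px[:, 0] >= 0) & (px[:, 0] < image_shape[1]) &
              (px[:, 1] >= 0) & (px[:, 1] < image_shape[0]))
    return px[inside]
```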

Fig. 3 Perspective 2 with re-projected candidate point clouds originating from area 1 of perspective 1. Point clouds have been calculated with (a) m = −15, (b) m = −10, (c) m = −5 and (d) m = 0.

The re-projection from one pixel in one perspective to the other perspective was performed to the nearest pixel. For our system, the rms uncertainty in the phase-to-height calibration of 60 μm corresponded to an rms uncertainty of approximately 1.5 pixels in re-projected position for the worst combination of perspectives (i.e. mutually orthogonal). The error was significantly less than this for the perspective combinations actually used. More sophisticated approaches to the re-projection, such as interpolation, were not found necessary. We chose to re-project areas of pixels rather than individual pixels to reduce sensitivity to noise. Intensity noise and calibration error cause the 3D points calculated for corresponding points from different perspectives to be unequal. In theory, it might be possible to construct candidate 3D points from every pixel in each perspective, and determine the correct points by analyzing local similarity statistics over the 3D volume. However, this approach is computationally expensive and the choice of a measure of similarity between 3D point clouds or surfaces is not obvious [13].

Absolute phase measurement extended to areas containing two or more discrete surfaces

It is apparent from Fig. 2 that areas of the unwrapped phase maps may contain more than one discrete surface of the object. There is no single value of m that will re-project all the pixels from such an area in one perspective into a single area in the phase map for any other perspective. Therefore searching for an unambiguous overlap in the re-projected areas, as described in the previous section, will not work. However, the same absolute phase measurement approach can be extended with the objective of further sub-dividing each area of the phase map into distinct regions, where each region corresponds to an individual surface (or part of an individual surface) of the object.

Figures 2(a) and 2(b) showed unwrapped phase maps for the object recorded from perspectives 1 and 2, respectively. Area 1 from perspective 1 was re-projected into perspective 2 in the previous section. Figures 2(c) and 2(d) now introduce unwrapped phase maps for two further perspectives of the object, perspectives 3 and 4 respectively. Areas and regions from perspective 1 will be re-projected into perspectives 2, 3 and 4. By analyzing the re-projected phase we show that areas of the phase maps can be further divided into regions corresponding to individual object surfaces for which the absolute phase can be determined. The complete shape of the object is then recovered by combining the absolute phase for areas and regions of the phase maps.

Multiple re-projections lie in a single area of the unwrapped phase

Consider the 3D points from area 1 from perspective 1, Fig. 2(a), re-projected into the phase maps corresponding to perspectives 3 and 4, Figs. 2(c) and 2(d) respectively. For perspectives 3 and 4, the top surface of the smaller cuboid was not identified as a separate area of the phase maps because the wrapped phase across part of the surface discontinuity was continuous. Therefore, when area 1 is re-projected for different values of m, the candidate 3D point sets move across the phase map but lie in a single area of unwrapped phase for more than one value of m. In this case, the correct value of m can be determined by analyzing the phase recorded in the re-projected region.

For each pixel in area 1, the expected value of the re-projected absolute phase, ϕe, at the nearest pixel in perspectives 3 and 4 can be calculated. The difference, ϕdiff = ϕe − ϕu, between the expected and measured phase is then calculated at that nearest re-projected pixel. Since ϕu is unwrapped phase, ϕdiff should be an integer multiple of 2π for the re-projected perspective; due to noise and re-projection errors, ϕdiff/2π is of course not exactly an integer. Hence the standard deviation in ϕdiff across all the points in the re-projection of area 1 into a different perspective measures how similar the expected and measured phase are for the re-projected area. The smallest standard deviation in ϕdiff, calculated over all pixels in the re-projected area, indicates the correct value of m, within the measurement uncertainty of the system. Prior to calculating ϕdiff, any path-dependent phase unwrapping errors must be removed from the unwrapped phase ϕu in the re-projected area; a convenient method is simply to re-unwrap ϕu within that area.
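A sketch of this selection step is given below. The hypothetical helper expected_phase_at wraps the calibration and relative pose: it returns the expected absolute phase ϕe and the nearest re-projected pixels in the other perspective. phi_other is assumed to be the other perspective's unwrapped phase map, already re-unwrapped locally to remove path-dependent errors.

```python
import numpy as np

def best_m(phi_u, pixels, phi_other, m_range, expected_phase_at):
    """Select m by minimising std(phi_diff) over the re-projected area.

    expected_phase_at(pixels, phi_abs) -> (phi_e, px): expected absolute
    phase and nearest-pixel (col, row) coordinates in the other view.
    """
    scores = {}
    for m in m_range:
        phi_e, px = expected_phase_at(pixels, phi_u + 2 * np.pi * m)
        if len(px) == 0:           # re-projection misses the measured data
            continue
        phi_diff = phi_e - phi_other[px[:, 1], px[:, 0]]
        scores[m] = np.std(phi_diff)
    return min(scores, key=scores.get), scores
```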

Figures 4(a) and 4(b) show the standard deviation in ϕdiff for area 1 re-projected into perspectives 3 and 4 with different values of m. In both cases, the lowest standard deviation in ϕdiff occurs for m = −10, in agreement with the previous section. For the re-projection into perspective 3, Fig. 4(a), a smooth transition in the standard deviation with m is observed, because the unwrapped phase around the top surface of the smaller cuboid is relatively uniform. For the re-projection into perspective 4, Fig. 4(b), the variation in the standard deviation is more erratic because the unwrapped phase around the top surface contains larger variations in phase. The re-projected phase for m = −5, −6, and −7 did not coincide with any measured phase in perspective 4 and no standard deviation in ϕdiff could be calculated.

Fig. 4 Standard deviation in difference between expected and measured phase, for area 1 from perspective 1 re-projected into (a) perspective 3 and (b) perspective 4. Error bars indicate estimated measurement uncertainty, ± 3ε/√N.

The error bars in Fig. 4 were determined from the measured rms phase uncertainty at a single pixel, ε ≈ 0.4 radians. The principal contributions to this uncertainty were 0.09 radians rms from the phase-to-height calibration (corresponding to approximately 60 μm in height) and 0.3 radians rms from the re-projection between perspectives (corresponding to the rms uncertainty of approximately 1.5 pixels in re-projected position for the worst combination of perspectives and the fringe pitch of ~30 pixels at the camera image plane). For a re-projected area comprising N pixels, the expected rms uncertainty in any measurement is approximately ε/√N for large N, leading to error bars of ±3ε/√N marked at each point in Fig. 4. For Fig. 4(a) the minimum standard deviation in ϕdiff at m = −10 is within the error bounds for m = −9 and m = −11, i.e. the minimum at m = −10 is within 6ε/√N of the next smallest values. Hence for this combination of perspectives, the choice of m = −10 for area 1 is ambiguous to within the measurement uncertainty, and another perspective should be used. For Fig. 4(b) the minimum standard deviation in ϕdiff is more than 6ε/√N below the next smallest value and so is unambiguous within the measurement uncertainty. The measurement also reveals that area 1 from perspective 1 projected into a smaller region of the unwrapped phase map in perspective 4, i.e. it will be necessary to divide that area of the unwrapped phase map into regions that correspond to separate surfaces of the object. Hence an iterative process becomes apparent, in which unwrapped areas are divided into regions corresponding to separate surfaces and more perspectives are acquired until the absolute phase is unambiguously and consistently defined for all areas and regions of the unwrapped phase maps. This iterative process is discussed briefly later.
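The acceptance test on the minimum follows directly from the error bars; in the sketch below, eps is the single-pixel rms phase uncertainty (≈ 0.4 radians here) and n the number of pixels in the re-projected area. The names are our own.

```python
import numpy as np

def minimum_is_unambiguous(scores, n, eps=0.4):
    """Accept the best m only if its std(phi_diff) lies more than
    6*eps/sqrt(n) below the runner-up, i.e. the +/-3*eps/sqrt(n)
    error bars of the two smallest values do not overlap."""
    (m_best, s_best), (_, s_next) = sorted(scores.items(),
                                           key=lambda kv: kv[1])[:2]
    return m_best, (s_next - s_best) > 6 * eps / np.sqrt(n)
```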

Regions not corresponding to complete surface features

So far, area 1 has been considered, which is an area of an unwrapped phase map corresponding to a complete surface feature in at least one perspective. Here we show that the technique also removes 2π ambiguities for general regions of the unwrapped phase maps that correspond to part of a surface of the object. Such regions may arise when a small area or region of unwrapped phase in one perspective is projected into a different perspective where a larger area of the same surface can be seen. As an example of this, regions 2 and 3 of the unwrapped phase map of perspective 1, Fig. 2(a), corresponding to parts of the larger curved surface of the object, were manually selected. Additionally, region 2 contains a path-dependent phase unwrapping error when projected into perspective 4, Fig. 2(d). Region 3 also contains a path-dependent phase unwrapping error in the original region in perspective 1, as well as in perspectives 2 and 3. As discussed previously, these path-dependent phase unwrapping errors arise due to singularities in the phase where two distinct surfaces cannot be correctly identified by spatial phase unwrapping alone.

The difference between the expected and measured phase ϕdiff was calculated for all pixels in regions 2 and 3 re-projected into the unwrapped phase maps for perspectives 2, 3 and 4 for the permitted range of m values. Figures 5(a) and 5(b) show the variation in the standard deviation in ϕdiff for different values of m. For region 2, an unambiguous value of m = −2 was determined from each of the other perspectives, Fig. 5(a). Similarly for region 3, an unambiguous value of m = −4 was determined from each other perspective, Fig. 5(b). In both cases, the lowest standard deviations in ϕdiff were consistent between all three perspectives and unambiguous within the estimated uncertainty for each region.

Fig. 5 (a) to (c) Standard deviation in the difference between expected and measured phase for regions of interest 2 to 4 in perspective 1, Fig. 2(a), respectively. Error bars indicate the estimated measurement uncertainty, ± 3ε/√N.

Region containing an unidentified surface discontinuity

Finally, we demonstrate the behavior of the technique when an area or region that is re-projected into another perspective contains a surface discontinuity. Region 4 in perspective 1 was manually selected to include a surface discontinuity that was not detected by the initial segmentation of the phase map, Fig. 2(a). The surface discontinuity was not correctly identified by spatial phase unwrapping in any of the unwrapped phase maps for perspectives 2, 3 and 4. The difference between the expected and measured phase, ϕdiff, was calculated for all pixels of region 4 re-projected into the unwrapped phase maps for perspectives 2, 3 and 4 for the permitted range of m values. Figure 5(c) shows the variation in the standard deviation in ϕdiff for different values of m. The individual minima for each perspective appear conclusive from their error bars, but each minimum occurs at a different value of m for each perspective because of the undetected discontinuity in the region. Furthermore, the minima were greater than the expected rms phase uncertainty at a single pixel of 0.4 radians, in contrast with the other regions. This result indicates the need to record a phase map from a further perspective from which the top surface of the larger cuboid could be identified.
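The decision logic can be formalized as below; this is our own sketch, not the authors' implementation. A region is accepted only when every perspective is individually unambiguous, all perspectives agree on m, and each minimum std(ϕdiff) falls below the single-pixel uncertainty; otherwise a further perspective is requested.

```python
def region_status(per_view, eps=0.4):
    """per_view: {view: (m_best, unambiguous, s_best)} per re-projection.

    Returns (m, True) if every view is unambiguous, all agree on m and all
    minima fall below eps; otherwise (None, False), signalling an undetected
    discontinuity or the need for another perspective.
    """
    ms = {m for m, ok, s in per_view.values()}
    consistent = (all(ok and s < eps for _, ok, s in per_view.values())
                  and len(ms) == 1)
    return (ms.pop(), True) if consistent else (None, False)
```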

Discussion

Figure 6 shows the final object shape reconstructed using the technique described, viewed from perspective 1. Data from a total of 6 perspectives were used, i.e. the four perspectives shown in Fig. 2 plus two further perspectives, resulting in approximately one million measured points on the object surface. For each perspective, each set of 5 phase-stepped images was acquired in approximately 2.5 seconds, allowing time for the CMM to settle after rotating the object about the camera’s perspective center. The time required for phase calculation, identification of surface discontinuities and spatial phase unwrapping is negligible. Implementing the technique described in this paper, i.e. initial phase map segmentation into areas, re-projection of areas from one perspective into the others, sub-division of areas into regions and determination of a consistent absolute phase for all areas and regions, took approximately 3 minutes and required no user intervention to obtain Fig. 6. A full description of the automated iterative implementation is rather tedious and does not illuminate the new technique described in this paper. In general, starting with any perspective, the iterative implementation applies the technique in turn to areas of pixels identified by the initial phase map segmentation. Areas, and subsequently regions, comprising the smallest number of pixels are considered first.

Fig. 6 Height map (absolute phase) obtained automatically using the techniques described herein.

The experimental system in this paper used a fixed camera and fringe projection unit and moved the object on a coordinate measuring machine (CMM). Clearly the technique is also suitable for a non-contact probe, comprising a camera and a projector fixed relative to each other, that is moved around the object on a CMM. Such a probe must be compact, lightweight and contain no moving or delicate parts due to the high accelerations (up to 20 m s−2) experienced on a CMM. Renishaw have built compact and rugged CMM-mounted prototype probes of this type that measure free-form surfaces whilst maintaining the intrinsic accuracy of both the mechanical and optical systems. These monolithic probes project a fringe pattern with a single, fixed period, because the projection of multiple fringe patterns is not feasible using either interchangeable slides (which require internal moving components) or a programmable data projector (too heavy to be attached to a 5-axis CMM, where weight distribution is an important consideration); miniature data projectors do not currently have the resolution or brightness required. The probe, comprising one camera and one projector, is moved relative to the object, and rotation of the probe about the camera’s perspective center is used to generate phase-stepped images [11]. Phase maps recorded from multiple perspectives of the object are used to resolve 2π ambiguities, as described in this paper. Alternatively, the technique could be implemented with a system of multiple fixed cameras and projectors. It is expected that full-field, non-contact optical probes will enable free-form surfaces to be measured accurately and much more quickly than with a traditional touch-tip probe. Free-form surfaces identified by the optical probe can then be measured to high accuracy with a scanning touch-probe guided by the optical measurement.

With real-time processing, it will be possible to detect automatically whether more perspectives of the object are needed in order to measure it completely, and path planning software will then calculate a suitable location from which to acquire the next view. In both cases the number of perspectives could be optimized.

Conclusions

A technique to resolve 2π ambiguity in phase maps has been demonstrated. Unlike previously reported techniques, automatic measurement can be achieved using a single camera and projector that are fixed relative to each other, projecting a single fringe pattern. Initial segmentation of the phase maps was achieved using a simple but novel technique to identify edges on an object by processing fringe images in different orders [14, 15]. The advantage of this technique over traditional image processing is that the phase variations are used to detect edges so performance is less dependent on background illumination variations and reflectivity properties of the object. The phase measured in an area from one view was then re-projected into the phase maps of the other views for a range of integer 2π phase offsets. The correct 2π offset was determined by analyzing the phase in the re-projected areas in the other views. By repeating this process in a systematic way for all areas in all views, homologous surface areas imaged from different perspectives were identified and the 2π phase ambiguity between them resolved, simultaneously. It was shown that the same technique can aid further image segmentation by identifying regions containing edges or steps that have not been identified by spatial phase unwrapping. The techniques reported may be useful in a variety of applications, in particular for a compact, lightweight and robust fringe projection probe with no moving parts mounted on a coordinate measuring machine [16, 17].

Acknowledgments

This project was part-funded by the Engineering and Physical Sciences Research Council [grant numbers GR/T11289/01 and GR/S12395/01]. Andrew Moore acknowledges the support of AWE through its William Penney Fellowship scheme.

References and links

1. G. Sansoni, S. Corini, S. Lazzari, R. Rodella, and F. Docchio, “Three-dimensional imaging based on Gray-code light projection: characterization of the measuring algorithm and development of a measuring system for industrial applications,” Appl. Opt. 36(19), 4463–4472 (1997). [CrossRef]   [PubMed]  

2. S. Zhang and S.-T. Yau, “Generic nonsinusoidal phase error correction for three-dimensional shape measurement using a digital video projector,” Appl. Opt. 46(1), 36–43 (2007). [CrossRef]   [PubMed]  

3. M. Reeves, A. J. Moore, D. P. Hand, and J. D. C. Jones, “Dynamic shape measurement system for laser materials processing,” Opt. Eng. 42(10), 2923–2929 (2003). [CrossRef]  

4. J. M. Fitts, “Hidden change distribution grating and use in 3D moire measurement sensors and CMM applications,” US Patent 5319445 (1994).

5. G. Sansoni, M. Carocci, and R. Rodella, “Three-dimensional vision based on a combination of gray-code and phase-shift light projection: analysis and compensation of the systematic errors,” Appl. Opt. 38(31), 6565–6573 (1999). [CrossRef]   [PubMed]  

6. J. M. Huntley and H. O. Saldner, “Shape measurement by temporal phase unwrapping: comparison of unwrapping algorithms,” Meas. Sci. Technol. 8(9), 986–992 (1997). [CrossRef]  

7. C. Reich, R. Ritter, and J. Thesing, “3-D shape measurement of complex objects by combining photogrammetry and fringe projection,” Opt. Eng. 39(1), 224–231 (2000). [CrossRef]  

8. D. Scharstein and R. Szeliski, “High-accuracy stereo depth maps using structured light,” in Proceedings of the 2003 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’03), (IEEE Computer Society, 2003), pp. I195–I202. [CrossRef]  

9. C. Bräuer-Burchardt, C. Munkelt, M. Heinze, P. Kühmstedt, and G. Notni, “Phase unwrapping in fringe projection systems using epipolar geometry,” in Advanced Concepts for Intelligent Vision Systems, LNCS 5259, 422–432 (2008).

10. R. Ishiyama, S. Sakamoto, J. Tajima, T. Okatani, and K. Deguchi, “Absolute phase measurements using geometric constraints between multiple cameras and projectors,” Appl. Opt. 46(17), 3528–3538 (2007). [CrossRef]   [PubMed]  

11. Y. R. Huddart, J. D. R. Valera, N. J. Weston, T. C. Featherstone, and A. J. Moore, “Phase-stepped fringe projection by rotation about the camera’s perspective center,” Opt. Express 19(19), 18458–18469 (2011). [CrossRef]   [PubMed]  

12. A. J. Moore, R. McBride, J. S. Barton, and J. D. C. Jones, “Closed-loop phase stepping in a calibrated fiber-optic fringe projector for shape measurement,” Appl. Opt. 41(16), 3348–3354 (2002). [CrossRef]   [PubMed]  

13. R. J. Campbell and P. J. Flynn, “A survey of free-form object representation and recognition techniques,” Comput. Vis. Image Underst. 81(2), 166–210 (2001). [CrossRef]  

14. N. J. Weston, Y. R. Huddart, A. J. Moore, and T. C. Featherstone, “Phase analysis measurement apparatus and method,” International patent pending WO2009/024757(A1) (2008).

15. N. J. Weston, Y. R. Huddart, and A. J. Moore, “Non-contact measurement apparatus and method,” International patent pending WO2009/024756(A1) (2008).

16. N. J. Weston and Y. R. Huddart, “Non-contact probe,” International patent pending WO2009/024758(A1) (2008).

17. N. J. Weston and Y. R. Huddart, “Non-contact object inspection,” International patent pending WO2011/030090(A1) (2011).
