## Abstract

A three-dimensional (3D) profilometry method without phase unwrapping is proposed. The key factors of the proposed profilometry are the use of composite projection of multi-frequency and four-step phase-shift sinusoidal fringes and its geometric analysis, which enable the proposed method to extract the depth information of largely separated discontinuous objects as well as lumped continuous objects. In particular, the geometric analysis of the multi-frequency sinusoidal fringe projection identifies the shape and position of target objects in the absolute coordinate system. In this paper, the depth extraction resolution of the proposed method is analyzed and experimental results are presented.

©2009 Optical Society of America

## 1. Introduction

3D profilometry is a technology for extracting the position and depth information of 3D objects and has been researched intensively due to its importance in space recognition [1–4]. In profilometry, specific patterns are designed and projected on the target object surfaces (this process is referred to as space coding), and then, from the images of the projected patterns on 3D object surfaces, the shapes and other information related to depth and position are analyzed (this process is referred to as space decoding).

In general, profilometry is comparable with coherent interferometry. Some coherent interferometric techniques use multi-frequency patterns synthesized with a few laser sources of different wavelengths, or with several different carrier frequencies, in order to reduce the phase ambiguity that usually appears in single-frequency interferometric techniques [5, 6]. One disadvantage of coherent interferometric techniques is the limited freedom in changing or controlling the carrier frequencies. In contrast, profilometry uses a projector with an incoherent light source and a charge-coupled device (CCD) camera. The fringe patterns for 3D profiling of target objects are directly generated by the projector, and the images of the fringe patterns projected on the target objects, captured by the CCD camera, are used as measurement fringe patterns. Thus profilometry has an advantage over coherent interferometric techniques in that the spatial frequencies of the fringe patterns are easily controlled and changed.

In some profilometry methods, a phase-shifting technique is used for reducing background noise and solving phase ambiguity problems [7, 8]. When the phase-shifting technique is used, at least three phase-shift steps are necessary, which gives two relative phase bases. However, four phase-shift steps are usually preferred for accuracy and reliability [9–11]. Thus, in this paper, we use the four-step phase-shifting method for synthesizing multi-frequency composite fringe patterns and performing the depth extraction.

On the other hand, conventional phase-shifting profilometry methods use a fringe pattern with a single spatial frequency [12, 13]. The spatial frequency is smaller than the maximum frequency that can be recorded by a CCD camera. In this case, phase unwrapping is indispensable for the depth extraction of target objects [14]. Single-frequency profilometry also limits the dynamic range of depth extraction. If separations or discontinuities in the target objects are larger than what the spatial frequency of the used fringe pattern can manage, the depth extraction of the objects fails. Also, if the spatial frequency is much higher than that of the prominence of the target object surface, it is hard to extract a fine depth map of the surface.

Therefore, profilometry techniques using multi-frequency fringe patterns have been actively researched. In multi-frequency profilometry techniques, relatively lower and relatively higher frequency fringes are used to analyze relatively large separations or discontinuities and a detailed depth map, respectively [15–18].

However, even in previous multi-frequency profilometry techniques, the need for phase unwrapping processes still remains, although one multi-frequency technique was successful in some applications for profiling surface morphology without phase unwrapping [19].

In this paper, a novel 3D profilometry method without phase unwrapping is proposed. The key factors of the proposed method are the use of composite projection of multi-frequency and four-step phase-shift sinusoidal fringe patterns and its geometric analysis. In most conventional profilometry techniques, only the relative depth between separate objects can be measured [20, 21]. In contrast, without any phase unwrapping process, the proposed method provides the depth and position information of target objects in the absolute coordinate system through a geometric analysis of the positions and fields of view of a projector and a CCD camera.

In Section 2, the proposed system geometry for depth extraction in the absolute coordinate system is described. The resolution of depth extraction of the system with a projector and a CCD camera with finite resolution is analyzed. In Section 3, the proposed depth extraction method is elucidated. The construction of multi-frequency and four-step phase-shift sinusoidal fringe patterns and its use for practical depth extraction are addressed in detail. In Section 4, experimental results are presented. The feasibility of extracting the depth of multiple objects with large discontinuity is shown. In Section 5, concluding remarks are provided.

## 2. System geometry for depth extraction in the absolute coordinate system

In this section, the system geometry for depth extraction in the absolute coordinate system is addressed. Figure 1 shows the geometry of the profilometry system represented in the two-dimensional *x*-*z* coordinate system. Basically, the profilometry system is composed of a projector, a CCD camera and target objects. A sinusoidal fringe pattern with a single frequency is projected onto the target objects, and the CCD camera captures and saves the image of the target objects with the fringe pattern on their surfaces. This projection and capture process is repeated for several fringe patterns with different frequencies. The meaning and objective of this measurement process will be detailed in the next section.

The centers of the imaging lenses of the projector and the CCD camera are referred to as the reference points of the projector and the CCD camera, respectively. In Fig. 1, the reference points of the projector and the CCD camera are located at (*x _{PRJ}*, *z _{PRJ}*) and (*x _{CCD}*, *z _{CCD}*), respectively. The projector and the CCD camera have finite angular fields of view. Let the angular fields of view of the projector and the CCD camera be denoted by Θ _{PRJ} and Θ _{CCD}, respectively. As indicated in Fig. 1, the image planes of the projector and the CCD camera need not be placed on the same plane. The normal vectors of the image planes of the projector and the CCD camera are respectively rotated by specific angles so that the regions inside the fields of view of the projector and the CCD camera have an intersection region and this intersection region covers the target objects to be observed. In Fig. 1, the rotation tilt angles of the projector and the CCD camera are indicated by *φ _{PRJ}* and *φ _{CCD}*, respectively. *φ _{PRJ}* (*φ _{CCD}*) is the angle between the left boundary of the field of view of the projector (CCD camera) and the *z*-axis.

Let us assume that the projector illuminates a single bright line image focused on a point on the object surface, (*x _{OBJ}*, *z _{OBJ}*). The position of the line in the projection image is straightforwardly converted to the local illumination angle, *θ _{Local}*, and the position of the line in the image captured by the CCD camera likewise corresponds to the detection angle, *θ _{Detect}*. These geometric parameters are used in extracting the depth of objects in the absolute coordinate system. In the rotated coordinates, where the *z*′-axis is parallel to the left boundary of the viewing region of the CCD camera, the bright line position, (*x*′_{OBJ}, *z*′_{OBJ}), on the object surface seen by the CCD camera is located on the line given by

A point (*x*′,*z*′) in the rotated coordinate system is transformed into (*x*,*z*) in the absolute coordinate system by a rotation transform through the tilt angle *φ _{CCD}* as

With all geometric parameters known, including the reference points (*x _{CCD}*, *z _{CCD}*) and (*x _{PRJ}*, *z _{PRJ}*), the position (*x _{OBJ}*, *z _{OBJ}*) can be calculated. Therefore, *x _{OBJ}* and *z _{OBJ}* are related as

From Eqs. (3a) and (3b), *x _{OBJ}* and *z _{OBJ}* are obtained as

The absolute coordinate position of a point, (*x _{OBJ}*, *z _{OBJ}*), on the surface of a target object is thus evaluated from the geometric parameters in absolute coordinates. In the above analysis, it is assumed that the point on the object surface is indicated by the bright line produced by the projector. In principle, if we scan the line image over the surfaces of the target objects and analyze the absolute coordinates of all points simultaneously, we obtain the total depth information of the target objects without any auxiliary process such as a phase unwrapping algorithm.

The resolution of depth extraction in this profilometry scheme is analyzed as follows. Figure 2 shows the intersectional region of the fields of view of the projector and the CCD camera. The depth of objects placed in this region can be extracted with the stated method. In practice, the projector and the CCD camera have finite resolutions of 1024×768 pixels and 1280×960 pixels, respectively. Accordingly, the field of view is quantized as shown in Fig. 2. In Fig. 3(a), the relation between the spatial resolution and the angular resolutions of both the CCD camera and the projector is shown. Since the projector is positioned apart from the CCD camera, the minimum resolvable voxel (volume pixel) has an approximately rhombic shape. Let the vertices of the (*m*,*n*)th voxel be indexed by (*x _{mn}*, *z _{mn}*), (*x _{mn+1}*, *z _{mn+1}*), (*x _{m+1n}*, *z _{m+1n}*) and (*x _{m+1n+1}*, *z _{m+1n+1}*). The area of this voxel is given by the sum of two triangular areas,

$$S({x}_{mn},{z}_{mn};{x}_{\mathrm{mn}+1},{z}_{\mathrm{mn}+1};{x}_{m+1n},{z}_{m+1n})+S({x}_{m+1n+1},{z}_{m+1n+1};{x}_{m+1n},{z}_{m+1n};{x}_{\mathrm{mn}+1},{z}_{\mathrm{mn}+1}).$$

Here, *S*() is Heron’s formula representing the triangular area with three points given by

The contour map of a resolvable voxel in the intersectional region is shown in Fig. 3(b). The minimum resolution is about 0.7*mm*^{2} around the point 1500*mm* away from the reference point of the projector. The 3D discontinuous objects used in the first experiment are placed from 2300*mm* to 2800*mm* from the reference point of the projector. Hence, the resolution in the first experiment ranges approximately from 4*mm*^{2} to 8*mm*^{2}.
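The voxel-area computation above, with *S*() given by Heron's formula, can be sketched as follows; `heron` and `voxel_area` are illustrative names, and the vertex ordering follows the two-triangle split described above.

```python
import math

def heron(x1, z1, x2, z2, x3, z3):
    """Triangle area from three vertices via Heron's formula."""
    a = math.dist((x1, z1), (x2, z2))
    b = math.dist((x2, z2), (x3, z3))
    c = math.dist((x3, z3), (x1, z1))
    s = (a + b + c) / 2.0  # semi-perimeter
    # max(..., 0.0) guards against tiny negative values from rounding.
    return math.sqrt(max(s * (s - a) * (s - b) * (s - c), 0.0))

def voxel_area(p_mn, p_mn1, p_m1n, p_m1n1):
    """Rhombic voxel area as the sum of the two triangles of the text."""
    return heron(*p_mn, *p_mn1, *p_m1n) + heron(*p_m1n1, *p_m1n, *p_mn1)
```

Evaluating `voxel_area` over the quantized field of view reproduces the kind of resolution contour map shown in Fig. 3(b).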

## 3. Depth extraction using multi-frequency and four-step phase-shift sinusoidal fringe composite

In Section 2, it was shown that if 3D objects can be scanned by a bright line, we can conduct 3D profiling of the target objects with simultaneous analysis of depth and position. In practice, directly scanning a bright line over the target objects is inefficient, since it requires capturing a motion sequence of the line image sweeping over the target objects.

In this paper, an efficient method for realizing the above principle with multi-frequency sinusoidal fringe patterns is devised. In this section, we describe the devised depth extraction method, which is effectively equivalent to the line image scanning stated above but considerably more efficient.

Figure 4 shows the schematic of the devised 3D profilometry method. Within the setup depicted in Fig. 1, a sinusoidal fringe pattern, *P _{n,p}*(*ξ*), with a specific spatial frequency, *f _{n}*, and a specific phase shift, *p*, is generated as given by

$${P}_{n,p}\left(\xi \right)=1+\mathrm{cos}\left(2\pi {f}_{n}\xi +p\right),$$

where *ξ* is the lateral axis of the projector image plane. The phase shift, *p*, takes one of the four quadrature phase-shift values,

$$p\in \left\{0,\pi /2,\pi ,3\pi /2\right\},$$

and the spatial frequency, *f _{n}*, is chosen as an inverse integer power of 2,

$${f}_{n}={2}^{-n}.$$
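The full measurement set of fringe patterns can be generated as in this sketch. The unit-offset normalization 1 + cos(·) and all names are assumptions for illustration; any affine rescaling into the projector's dynamic range is equivalent.

```python
import numpy as np

WIDTH = 1024  # lateral resolution of the projector image plane

def fringe(n, p, width=WIDTH):
    """Sinusoidal fringe P_{n,p}(xi) = 1 + cos(2*pi*f_n*xi + p), f_n = 2**-n."""
    xi = np.arange(width)
    return 1.0 + np.cos(2.0 * np.pi * 2.0 ** -n * xi + p)

# Full measurement set: nine frequencies (n = 2..10) x 4 quadrature phase shifts.
PHASES = (0.0, np.pi / 2, np.pi, 3 * np.pi / 2)
patterns = {(n, p): fringe(n, p) for n in range(2, 11) for p in PHASES}
```

Note that the patterns are nonnegative everywhere, reflecting the fact that a projector can only emit positive intensities; for *n* = 2,…,10 this gives the 36 patterns used in the experiment.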

A sinusoidal fringe pattern, *P _{n,p}*(*ξ*), is projected onto the target objects located in the intersectional region of the fields of view of the projector and the CCD camera. The CCD camera captures the image of the distorted fringe pattern formed on the object surface. The captured image is denoted by *I _{n,p}* and saved for depth analysis through post data processing.

In the proposed method, we take a bundle of images of distorted sinusoidal fringes at several spatial frequencies *f _{n}*. Importantly, for each spatial frequency *f _{n}*, four distorted fringe images corresponding to the four quadrature phase-shift values *p* = 0, *π*/2, *π*, and 3*π*/2 have to be recorded. Thus, if we use *N* spatial frequencies, we must capture 4*N* pictures. The next step is the numerical depth extraction of the target objects from the obtained 4*N* pictures.

Before this process, let us address a mathematical property of the superposition of the raw data pictures *I _{n,p}*. Basically, the objective of using several multi-frequency fringe patterns is to equivalently realize the bright line scanning stated in Section 2. According to the Fourier transform, a sharp bright line image can be represented by a superposition of several weighted sinusoidal fringe patterns. The reason four fringe patterns with quadrature phase-shifts are needed is to construct an exponential Fourier basis. Since the sinusoidal fringe patterns that can be generated by the projector are inevitably positive-valued functions, the necessary exponential Fourier basis for a spatial frequency *f _{n}* is constructed as follows. cos(2*πf _{n}ξ*) and sin(2*πf _{n}ξ*) are given, respectively, as

$$\mathrm{cos}\left(2\pi {f}_{n}\xi \right)=\left[{P}_{n,0}\left(\xi \right)-{P}_{n,\pi}\left(\xi \right)\right]/2,$$

$$\mathrm{sin}\left(2\pi {f}_{n}\xi \right)=\left[{P}_{n,3\pi /2}\left(\xi \right)-{P}_{n,\pi /2}\left(\xi \right)\right]/2.$$

As a result, the exponential Fourier basis can be obtained as

$$\mathrm{exp}\left(j2\pi {f}_{n}\xi \right)=\left\{\left[{P}_{n,0}\left(\xi \right)-{P}_{n,\pi}\left(\xi \right)\right]+j\left[{P}_{n,3\pi /2}\left(\xi \right)-{P}_{n,\pi /2}\left(\xi \right)\right]\right\}/2.$$

Then we set a shifted pulse-shaped function *h*(*ξ*-*s*) representing the bright line image, where *s* is the lateral translation of the centered pulse-shaped function *h*(*ξ*), and find the Fourier coefficients of *h*(*ξ*-*s*), *Fr*[*h*(*ξ*;*s*)](*f _{ξ}*). If we use a finite number of Fourier harmonics, the Fourier series cannot represent the original shifted pulse-shaped function, *h*(*ξ*-*s*), exactly, but it is enough to represent a local illumination pattern in which the brightness around the center of the original pulse-shaped function is relatively stronger than that of the other parts. Let this partially truncated Fourier series of the original pulse-shaped function *h*(*ξ*-*s*) be denoted by *P _{Local}*(*ξ*; *s*). At this stage, *P _{Local}*(*ξ*; *s*) is represented as

where real(*t*) is the real part of a complex number *t*.

For convenience, we take the pulse-shaped function to be a delta function with peak position *s*, given by

$$h\left(\xi -s\right)=\delta \left(\xi -s\right).$$

As a result, the local illumination image function, *P _{Local}*(*ξ*; *s*), is represented as

$${P}_{\mathrm{Local}}\left(\xi ;s\right)=\mathrm{real}\left\{{\sum}_{n}\mathrm{exp}\left(-j2\pi {f}_{n}s\right)\left\{\left[{P}_{n,0}\left(\xi \right)-{P}_{n,\pi}\left(\xi \right)\right]+j\left[{P}_{n,3\pi /2}\left(\xi \right)-{P}_{n,\pi /2}\left(\xi \right)\right]\right\}/2\right\}.$$

For a projector with a finite resolution of 1024×768 pixels, the minimum spatial frequency of an expressible sinusoidal fringe pattern is 2^{-10}, i.e., one period across the full lateral width. Thus, the available spatial frequencies for the projector with 1024×768 resolution are in the range of 2^{-10} ≤ *f _{n}* ≤ 2^{0}. The important property of Eq. (12) associated with profilometry is that the peak position of the local illumination function can be simply controlled by changing *s*, where *s* is in the range of 0 ≤ *s* ≤ 1023. Therefore, we can effectively perform the bright line scanning action with just the raw images of multi-frequency sinusoidal fringes by repeatedly calculating Eq. (12) while varying *s*. Without any actual scanning of a bright illumination line over the target objects, we can numerically calculate the pictures of the object surface illuminated by shifted bright line images, *I _{Local}*(*s*), according to the superposition principle, as

As a result, without any actual scanning action, 1024 line scanning images of the target objects are obtained from just 4*N* captured pictures. This is the principal benefit of using multi-frequency and four-step phase-shift sinusoidal fringe projection in the proposed profilometry method.
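The synthesis of Eq. (12) can be sketched as follows; by the superposition principle, applying the same linear combination to the captured images *I _{n,p}* yields *I _{Local}*(*s*) of Eq. (13). The fringe normalization 1 + cos(2*πf _{n}ξ* + *p*) and all names are assumptions for illustration.

```python
import numpy as np

PHASES = (0.0, np.pi / 2, np.pi, 3 * np.pi / 2)

def local_illumination(images, s, ns=range(2, 11)):
    """Evaluate Eq. (12): combine the four phase-shifted frames at each
    spatial frequency f_n = 2**-n into an exponential basis, apply the
    shift factor exp(-j*2*pi*f_n*s), and take the real part of the sum."""
    total = 0.0
    for n in ns:
        f_n = 2.0 ** -n
        basis = ((images[(n, 0.0)] - images[(n, np.pi)])
                 + 1j * (images[(n, 3 * np.pi / 2)] - images[(n, np.pi / 2)])) / 2.0
        total = total + np.real(np.exp(-2j * np.pi * f_n * s) * basis)
    return total

# Demonstration on the ideal projected patterns themselves:
xi = np.arange(1024)
frames = {(n, p): 1.0 + np.cos(2.0 * np.pi * 2.0 ** -n * xi + p)
          for n in range(2, 11) for p in PHASES}
profile = local_illumination(frames, s=512)  # peaks at xi = 512
```

Sweeping `s` from 0 to 1023 replays the bright-line scan numerically from the same fixed set of 4*N* frames.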

Figure 5 shows some examples of the local illumination function, *P _{Local}*(*ξ*; *s*), of Eq. (12). In every chart, the *ξ*-axis indicates the lateral pixel index of the 1024×768 projector. The *η*-axis is the light intensity of the composite local illumination image.

In Fig. 5(a), the local illumination function composed of a single spatial frequency fringe of *n* = 10 is shown. In Figs. 5(b) and 5(c), the local illumination functions composed of two sinusoidal fringes with spatial frequencies of *n* = 5 and *n* = 10 and with *n* = 9 and *n* = 10 are presented. When nine fringes with spatial frequencies of *n* = 2, 3, …, 10 are used, as indicated in Fig. 5(d), a discernible sharp peak appears around the center, *ξ* = 512. Although the differences between the light intensity at the center pixel (*ξ* = 512) and those at other parts are small, they are sufficient for local illumination and its scanning.
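The peak-contrast behavior described for Fig. 5 can be checked numerically: with more frequencies in the sum, the gap between the peak at *ξ* = *s* and the strongest off-peak value grows. A small sketch, assuming ideal undistorted fringes and illustrative names:

```python
import numpy as np

xi = np.arange(1024)
s = 512  # desired peak position of the local illumination

def contrast(ns):
    """Peak minus strongest sidelobe of the truncated Fourier synthesis
    sum_n cos(2*pi*2**-n*(xi - s)) over the frequency index set ns."""
    prof = sum(np.cos(2.0 * np.pi * 2.0 ** -n * (xi - s)) for n in ns)
    return prof[s] - np.delete(prof, s).max()

# More frequencies -> a sharper, higher-contrast peak, as in Fig. 5.
c_two = contrast([9, 10])        # cf. Fig. 5(c): two frequencies
c_nine = contrast(range(2, 11))  # cf. Fig. 5(d): nine frequencies
```

Even for nine frequencies the contrast stays well below the peak value itself, consistent with the observation that the intensity differences are small yet usable.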

Next, the scanning action of the local illumination function with varying shift parameter *s* is demonstrated. Figure 6 shows the translation of a few fringe patterns and of the composite local illumination functions as the shift parameter *s* varies. As indicated in Eq. (12), the fringe pattern of spatial frequency *f _{n}* is shifted by the amount of 2*πf _{n}s*. In Fig. 6, the first, second and third rows show the translation of single frequency fringe patterns for *n* = 10, *n* = 9, and *n* = 5, respectively. The peak scanning action can be seen in the first, second, and third columns (indicated by (a), (b), and (c), respectively).

With the combination of the effective local illumination scanning method stated in this section and the geometric analysis stated in Section 2, we can perform the depth extraction of target objects in the absolute coordinate system.

## 4. Experimental results

Figure 7 shows the experimental setup composed of a projector (Mitsubishi SCD-SX90), a CCD camera (SONY XCD-SX90) and target objects. The first target is two Greek head-shaped statues (Venus on the left and Julian on the right), which are at distances of 1800*mm* and 2300*mm* from the projector. In Fig. 7, blue solid lines and red dashed lines indicate the fields of view of the projector and the CCD camera, respectively.

The pictures of the objects illuminated by the fringe patterns with the lowest and the highest spatial frequency are shown in Figs. 8(a) and 8(b), respectively. In total, 36 pictures are taken for the sinusoidal fringe patterns of nine spatial frequencies. In Fig. 8(c), the image of the locally illuminated objects synthesized according to Eq. (13) is presented. In this figure, the red colored bright line illumination is perceived on the side facet of Venus. With the geometric analysis, the absolute coordinates of the bright points can be obtained. In Fig. 8(d), the movie of scanning the local illumination is attached.

Figure 9 exhibits the depth extraction results obtained in the experiment. The white parts in the images indicate parts with incorrect depth information to be avoided or improved. Since this experiment is set for the depth range from 1720*mm* to 2300*mm*, depth values beyond this range are rendered as white parts indicating non-measurable depth. Figure 9(a) shows the depth profile obtained using a single spatial frequency fringe pattern of *n* = 10. It can be seen that several parts, especially on the back of the head of Venus and the front of the head of Julian, are white-colored. In Figs. 9(b) and 9(c), the depth profiles extracted using two fringe patterns of *n* = 5, 10 and *n* = 9, 10, respectively, are shown. Figure 9(c) shows a more successful measurement of the depth profile of almost the whole object than the result in Fig. 9(b), with respect to the amount of the white-colored incorrect depth region.

Comparing Figs. 9(b) and 9(c), the local illumination used for Fig. 9(c) has a more apparent difference between the sharpest peak at the 512th pixel and the signal level at other parts than that used for Fig. 9(b). Hence the local illumination with *n* = 9 and *n* = 10 works better than the one with *n* = 5 and *n* = 10. In Fig. 9(d), the result obtained using nine sinusoidal fringe patterns of *n* = 2, 3, …, 10 is presented. It demonstrates that the depth of the discontinuous objects is measured accurately over the whole image, without any white parts (incorrect depth extraction), using the local illumination synthesized with just nine sinusoidal fringe patterns.

To demonstrate the accuracy of the proposed method more vividly, another sample, composed of 20 spatially separated balls with a radius of 30*mm*, is chosen as shown in Fig. 10(a). The depths of the 20 balls are set within the range from 1600*mm* to 2300*mm*. This target sample clearly shows the accuracy of the proposed profilometry method, since the measured depth values can be compared with the actual exact depth values.

Figure 10(a) represents the picture of the 20 balls taken by the CCD camera. Figures 10(b) and 10(c) show the experimentally measured depth extraction of the 20 balls using nine multi-frequency fringe patterns. Figure 10(c) is a perspective view of Fig. 10(b). In Table 1, the comparison of the real exact depth values and the measured depth values is presented. It successfully demonstrates that the proposed profilometry method performs the depth extraction of the separated objects in the absolute coordinate system without a phase unwrapping algorithm.

## 5. Conclusion

In conclusion, 3D profilometry without phase unwrapping using multi-frequency and four-step phase-shift sinusoidal fringe projection has been developed. The scanning of local illumination pattern necessary for the depth extraction without phase unwrapping is performed by composition of multi-frequency sinusoidal fringes with four-step phase shifting method. In this proposed profilometry, the ambiguity in depth extraction of conventional profilometry methods associated with phase unwrapping is removed and the feasibility is demonstrated with experimental results.

## Acknowledgments

This work was supported by the Korea Science and Engineering Foundation and the Ministry of Education, Science and Engineering of Korea through the National Creative Research Initiative Program (#R16-2007-030-01001-0).

## References and links

**1. **E. Stoykova, J. Harizanova, and V. Sainov, “Pattern projection profilometry for 3D coordinates measurement of dynamic scenes,” in *Three-Dimensional Television: Capture, Transmission, Display*, H. M. Ozaktas and L. Onural, ed. (Springer, 2008), pp. 85–164. [CrossRef]

**2. **J. Salvi, J. Pagès, and J. Batlle, “Pattern codification strategies in structured light systems,” Pattern Recogn. **37**, 827–849 (2004). [CrossRef]

**3. **D. M. Meadows, W. O. Johnson, and J. B. Allen, “Generation of surface contours by Moiré patterns,” Appl. Opt. **9**, 942–947 (1970). [CrossRef] [PubMed]

**4. **H. Takasaki, “Moiré topography,” Appl. Opt. **9**, 1467–1472 (1970). [CrossRef] [PubMed]

**5. **P. Picart, E. Moisson, and D. Mounier, “Twin sensitivity measurement by spatial multiplexing of digitally recorded holograms,” Appl. Opt. **42**, 1947–1957 (2003). [CrossRef] [PubMed]

**6. **E. A. Barbosa, E. A. Lima, M. R. R. Gesualdi, and M. Muramatsu, “Enhanced multiwavelength holographic profilometry by laser mode selection,” Opt. Eng. **46**, 075601–075607 (2007). [CrossRef]

**7. **G. Mauvoisin, F. Brémand, and A. Lagarde, “Three-dimensional shape reconstruction by phase-shifting shadow Moiré,” Appl. Opt. **33**, 2163–2169 (1994). [CrossRef] [PubMed]

**8. **X. F. Meng, L. Z. Cai, X. F. Xu, X. L. Yang, X. X. Shen, G. Y. Dong, and Y. R. Wang, “Two-step phase-shifting interferometry and its application in image encryption,” Opt. Lett. **31**, 1414–1416 (2006). [CrossRef] [PubMed]

**9. **I. Yamaguchi, “Phase-shifting digital holography,” in *Digital Holography and Three-Dimensional Display*, T.-C. Poon, ed. (Springer, 2007), pp. 145–172.

**10. **J. Hahn, H. Kim, S.-W. Cho, and B. Lee, “Phase-shifting interferometry with genetic algorithm-based twin image noise elimination,” Appl. Opt. **47**, 4068–4076 (2008). [CrossRef] [PubMed]

**11. **D. Kim and Y. J. Cho, “3-D surface profile measurement using an acousto-optic tunable filter based spectral phase shifting technique,” J. Opt. Soc. Korea **12**, 281–287 (2008). [CrossRef]

**12. **M. Chang and C. S. Ho, “Phase measuring profilometry using sinusoidal grating,” Exp. Mech. **33**, 117–122 (1993). [CrossRef]

**13. **R. Zheng, Y. Wang, X. Zhang, and Y. Song, “Two-dimensional phase-measuring profilometry,” Appl. Opt. **44**, 954–958 (2005). [CrossRef] [PubMed]

**14. **M. Takeda, Q. Gu, M. Kinoshita, H. Takai, and Y. Takahashi, “Frequency-multiplex Fourier-transform profilometry: a single-shot three-dimensional shape measurement of objects with large height discontinuities and/or surface isolations,” Appl. Opt. **36**, 5347–5354 (1997). [CrossRef] [PubMed]

**15. **W. Su and H. Liu, “Calibration-based two-frequency projected fringe profilometry: a robust, accurate, and single-shot measurement for objects with large depth discontinuities,” Opt. Express **14**, 9178–9187 (2006). [CrossRef] [PubMed]

**16. **J.-L. Li, H.-J. Su, and X.-Y. Su, “Two-frequency grating used in phase-measuring profilometry,” Appl. Opt. **36**, 277–280 (1997). [CrossRef] [PubMed]

**17. **C. E. Towers, D. P. Towers, and J. D. C. Jones, “Optimum frequency selection in multifrequency interferometry,” Opt. Lett. **28**, 887–889 (2003). [CrossRef] [PubMed]

**18. **J. Li, L. G. Hassebrook, and C. Guan, “Optimized two-frequency phase-measuring-profilometry light-sensor temporal-noise sensitivity,” J. Opt. Soc. Am. A **20**, 106–115 (2003). [CrossRef]

**19. **J. Ryu, S. S. Hong, B. K. P. Horn, D. M. Freeman, and M. S. Mermelstein, “Multibeam interferometric illumination as the primary source of resolution in optical microscopy,” Appl. Phys. Lett. **88**, 171112 (2006). [CrossRef]

**20. **W.-J. Ryu, Y.-J. Kang, S.-H. Baik, and S.-J. Kang, “A study on the 3-D measurement by using digital projection Moiré method,” Opt. Int. J. Light Electron. Opt. **119**, 453–458 (2007). [CrossRef]

**21. **J. A. N. Buytaert and J. J. J. Dirckx, “Moiré profilometry using liquid crystals for projection and demodulation,” Opt. Express **16**, 179–193 (2008). [CrossRef] [PubMed]