Abstract
We present a method based on color encoding for the measurement of transient 3D deformation in diffuse objects. The object is illuminated by structured light consisting of a fringe pattern with cyan fringes embedded in a white background. Color images are registered, and the information on each color channel is then separated. Surface features appear on the blue channel, while the fringes appear on the red channel. The in-plane components of displacement are calculated via digital correlation of the texture images. Likewise, the resulting fringes serve for measuring the out-of-plane component. As cross-talk between the signals is avoided, the accuracy of the method is high. This is confirmed by a series of displacement measurements of an aluminum plate.
©2011 Optical Society of America
1. Introduction
Recently it has been shown that combining digital image correlation (DIC) and fringe projection (FP) is adequate for the simultaneous measurement of the three components of deformation in solid objects [1, 2]. In general, a complete characterization of a displacement field must contain the two in-plane displacement components and the out-of-plane component. The two mutually perpendicular in-plane components of displacement are obtained by the correlation technique, whereas the out-of-plane component is addressed by fringe projection. In both techniques, two images of the object are captured for two different states (the reference image and the displaced image, respectively). As the object undergoes deformation, the corresponding spatial structures carrying the information (variations of intensity of the object surface in the case of digital correlation, and fringes in fringe projection) move accordingly. Thus, displacement fields can be calculated by comparing the reference image with the displaced image. If the target deformation event is relatively slow, then all three components of displacement may be obtained simultaneously.
To widen the range of events it is desirable that the information required by DIC and FP be contained in just one image. This has been shown in [1], where separation of the signals is carried out by the Fourier transform; the fringe information is filtered out in the Fourier domain, yielding a fringe-free image ready to be used by DIC. In this case, the carrier frequency of FP has to be larger than the maximum signal frequency of the object surface. As pointed out in [3], this may limit the size of the region of interest and the range of in-plane displacements. An alternative is encoding the signals for both DIC and FP in the RGB channels of a color image [3]. Color encoding has been used in 3D shape recovery [4, 5] and fluid studies [6, 7]. As shown in [3], the necessary information for measuring 3D displacement is contained in just one RGB image, and calibration procedures are not required as in methods that use multiple cameras, such as 3D DIC [8, 9]. In [3], red spots (speckles) that serve as the DIC signal are directly printed onto the object surface, and fringes with blue and white portions are projected simultaneously. By the use of a color camera, the signals for DIC and FP can then be separated. However, as the red spots appear in the fringe information, they reduce the accuracy of the technique; to alleviate this problem, a directional filter was applied. In the present work, we propose an improvement to the color encoding technique that significantly reduces the presence of speckles in the image of fringes, thus increasing the accuracy of the technique.
2. Theoretical background
2.1 DIC
An image of the surface features (white-light speckle) of an object illuminated by white light may serve as the carrier signal for DIC. A schematic of an optical arrangement for DIC is shown in Fig. 1. By taking two images of the object with a CCD camera, before and after deformation, the relative displacement between these images can be found by cross correlating corresponding subimages [10, 11]. Take $f(x,y)$ and $g(x,y)$ as the intensity distributions of the reference and displaced subimages, respectively; then, we may assume that $g(x,y) = f(x-u, y-v)$, with $(u,v)$ the in-plane displacement. The relative displacements may be determined by the two-dimensional correlation function defined as

$$C(u,v) = \iint f(x,y)\, g(x+u, y+v)\, \mathrm{d}x\, \mathrm{d}y, \quad (1)$$

which in turn may be obtained by the Fourier transform,

$$C(u,v) = \Im^{-1}\{ F^{*}(f_x, f_y)\, G(f_x, f_y) \},$$

where $F$ and $G$ denote the Fourier transforms of $f$ and $g$, respectively. The inverse Fourier transform operator is indicated by $\Im^{-1}$, the frequency-domain variables by $(f_x, f_y)$, and the asterisk denotes complex conjugation. The coordinates of the location of the maximum of the correlation map correspond directly to the desired in-plane displacements. Subpixel resolution in the displacements can be achieved by fitting Gaussian [10] or paraboloidal [11] functions to the values defining the maximum peak of correlation.

2.2 Fringe projection
Fringe projection is a valuable tool for the calculation of out-of-plane displacements such as those involved in vibration analyses [12]. In this case, with the aid of a projector, a pattern of straight fringes is projected onto the surface of an object. The optical arrangement is identical to the one used for DIC. The camera captures images of the object surface as deformation is applied. For the reference state, assuming a plane object, the fringes may deviate from straightness due to projection effects and optical aberrations of the imaging lens. If these effects are assumed to remain constant for both the deformed image and the reference image, then they can be readily compensated by subtracting the corresponding phases. As the object undergoes deformation, the out-of-plane component of displacement makes the fringes depart further from straightness. Deviations from straightness of the fringes may be up to several periods of the projected grating. The lateral deviation from straightness $s$ at each point is related directly to the change of phase $\Delta\phi$ by $s = p\,\Delta\phi/2\pi$, with $p$ being the period of the grating, which is taken as constant for a telecentric system; from Fig. 1 we can see that the out-of-plane displacement component is given by $w = s/\tan\theta$, where $\theta$ is the projection angle; normal observation is considered as well, as shown in Fig. 1. By introducing an equivalent period $p_e = p/\tan\theta$, the last equation can be rewritten as

$$w = \frac{p_e\, \Delta\phi}{2\pi}. \quad (2)$$
The change of phase $\Delta\phi$ can be calculated via the Fourier method [13] as follows. Let the reference image be expressed by [14]

$$I_1(x,y) = a(x,y) + b(x,y)\cos[2\pi f_0 x + \phi_1(x,y)], \quad (3)$$

where $a(x,y)$ is the background illumination, $b(x,y)$ the modulation term, $f_0$ a carrier frequency that allows us the use of the Fourier method for automatic phase calculation, and $\phi_1(x,y)$ a phase term that accounts for projection and aberration effects. Likewise, the displaced image may be defined by

$$I_2(x,y) = a(x,y) + b(x,y)\cos[2\pi f_0 x + \phi_2(x,y)], \quad (4)$$

where the phase term $\phi_2(x,y)$ additionally contains the contribution of the deformation. The arguments of Eqs. (3) and (4) can be calculated by the Fourier method. For example, for the reference image, after applying the Fourier transform operator we obtain
$$\tilde{I}_1(f_x, f_y) = A(f_x, f_y) + C_1(f_x - f_0, f_y) + C_1^{*}(-f_x - f_0, f_y), \quad (5)$$

where $A$ denotes the Fourier transform of the background term $a$ and $C_1$ that of $\tfrac{1}{2} b\, e^{i\phi_1}$. The asterisk denotes the operation of complex conjugation. Then we apply the inverse Fourier transform to a band-pass filtered version of Eq. (5) that isolates one of its spectrum side lobes (centered at the carrier frequency $f_0$),

$$c_1(x,y) = \tfrac{1}{2}\, b(x,y)\, e^{i[2\pi f_0 x + \phi_1(x,y)]}, \quad (6)$$

where $c_1$ is the filtered complex signal associated with the reference image. Therefore, the reference argument can be obtained by

$$2\pi f_0 x + \phi_1(x,y) = \arctan\left\{ \frac{\mathrm{Im}[c_1(x,y)]}{\mathrm{Re}[c_1(x,y)]} \right\}. \quad (7)$$

In a similar way, the argument of the displaced image can be calculated from the corresponding filtered signal $c_2(x,y)$; and the desired phase term $\Delta\phi$ can be finally found by subtracting directly the latter two arguments, or alternatively by [15]

$$\Delta\phi = \arctan\left\{ \frac{\mathrm{Im}(c_2)\,\mathrm{Re}(c_1) - \mathrm{Im}(c_1)\,\mathrm{Re}(c_2)}{\mathrm{Re}(c_1)\,\mathrm{Re}(c_2) + \mathrm{Im}(c_1)\,\mathrm{Im}(c_2)} \right\}, \quad (8)$$
where subscripts 1 and 2 correspond to the reference and displaced images, respectively. Phase maps calculated by Eq. (8) may result wrapped when absolute difference-of-phase values are larger than $\pi$ rad.
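The phase-recovery chain of Eqs. (3)–(8) can be sketched numerically. The following is a minimal Python/NumPy illustration (not the authors' Matlab code): it builds two synthetic fringe images with a known smooth phase change, isolates the side lobe at the carrier with a rectangular band-pass window (the window shape and half-width are our own choices), and applies the difference formula of Eq. (8).

```python
import numpy as np

def phase_difference(i1, i2, f0, halfwidth=0.05):
    """Phase change between two fringe images via the Fourier method.

    A rectangular band-pass window isolates the side lobe at +f0
    (cycles/pixel along x); the phase difference is then obtained
    with the formula of Eq. (8).
    """
    fx = np.fft.fftfreq(i1.shape[1])
    window = (np.abs(fx - f0) < halfwidth)[None, :]
    c1 = np.fft.ifft2(np.fft.fft2(i1) * window)   # Eq. (6), reference
    c2 = np.fft.ifft2(np.fft.fft2(i2) * window)   # same, displaced
    num = c2.imag * c1.real - c1.imag * c2.real   # Eq. (8), numerator
    den = c1.real * c2.real + c1.imag * c2.imag   # Eq. (8), denominator
    return np.arctan2(num, den)

# Synthetic pair with a known, smooth phase change (32 carrier cycles)
ny, nx = 256, 256
x = np.arange(nx)[None, :]
y = np.arange(ny)[:, None]
f0 = 0.125
dphi = np.exp(-((x - 128.0)**2 + (y - 128.0)**2) / (2 * 40.0**2))
i1 = 0.5 + 0.4 * np.cos(2 * np.pi * f0 * x) + 0.0 * y
i2 = 0.5 + 0.4 * np.cos(2 * np.pi * f0 * x + dphi)
est = phase_difference(i1, i2, f0)
err = np.abs(est - dphi)[32:-32, 32:-32].max()   # interior error only
```

With a phase change whose spectrum lies well within the band-pass window, the recovered map matches the prescribed one except for small filtering ripple near the borders.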
2.3 One-shot method by FP and DIC
For the analysis of deformation in relatively fast phenomena, such as vibration studies, it is desirable that one image contain all the necessary information. This may be fulfilled by combining DIC and FP. To register simultaneously the signals for both techniques, an image with cyan stripes (a combination of blue and green, with RGB values of [0,255,255]) embedded in a white background [255,255,255] is considered. A computer-generated structured pattern based on this idea is shown in Fig. 2(a). The ratio of the cyan part to the white part of each fringe is assumed to be 0.25. The resulting illuminating spatial pattern is a binary grating instead of the sinusoidal type required by Eq. (3). However, this fact does not change the theory given above, since any harmonic side lobes can be readily filtered out when applying the band-pass filter in the Fourier method. Furthermore, when considering the optical transfer functions of the optical components involved (projector, imaging lens and camera sensor), the obtained profile of the fringes departs from being square and looks somewhat sinusoidal; see the red plot in Fig. 2(d), where an equivalent grating period of 1.93 mm is used. With a white-painted object, the surface texture information appears on the blue channel, since the blue region of the projected image presents an intensity similar to that of the white region, and hence no fringes result. In contrast, fringe information appears on the red channel, as the cyan portion of the projected image is blocked and therefore appears black. As the white portion of the projected image results in nonzero intensity, a pattern of binary fringes is obtained; see Fig. 2(e). The corresponding image for the blue channel of Fig. 2(b) is shown in Fig. 2(f). Cross sections of the Fourier transforms of these two latter images are shown in Fig. 2(g). As observed, a residual signal at the carrier frequency is present in the blue image.
If the strength of this signal is significant it may affect the in-plane measurements. In the results presented below this was not the case.
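The channel-separation principle can be illustrated with an idealized simulation (perfect projector and camera, no channel cross-talk or color balancing, and a hypothetical texture model of our own): the red channel of the recorded image carries binary fringes modulated by the texture, while the blue channel carries the texture alone.

```python
import numpy as np

rng = np.random.default_rng(0)
ny, nx, period = 128, 256, 16
texture = 0.7 + 0.3 * rng.random((ny, nx))        # diffuse surface texture

# Projected pattern: cyan stripes (red switched off) on a white background,
# with the cyan part occupying 25% of each period
frac = (np.arange(nx) % period) / period
stripe = (frac < 0.25)[None, :]
red = texture * np.where(stripe, 0.0, 1.0)        # binary fringes x texture
blue = texture                                    # texture only, no fringes

# Compare spectral content at the fringe frequency for both channels
f_carrier = nx // period
spec_red = np.abs(np.fft.fft(red.mean(axis=0)))[f_carrier]
spec_blue = np.abs(np.fft.fft(blue.mean(axis=0)))[f_carrier]
ratio = spec_red / (spec_blue + 1e-12)            # carrier mostly in red
```

In this idealized setting the carrier peak in the blue channel is only the residual texture noise at that frequency; the camera-induced residual signal observed experimentally is not modeled here.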
As noted from Fig. 2(b), in the registered image, the white part of the fringes acquires a greenish hue. This effect arises from the mechanisms of color balancing of the camera and it increases as the grating pitch decreases. Cross sections of Figs. 2(a) and (b) are shown in Figs. 2(c) and (d).
In Fig. 3 we show another example, but with an equivalent period of 9.90 mm. As observed, the white part of the fringe, despite being larger than before, also acquires a bluish hue because of artifacts of the camera. For better results in this case, the cyan color was set to [0,255,200]. By observing Fig. 3(f), it may seem that the blue value could be lowered further in order to reduce the observed residual fringes in the speckle image, but in that case the camera starts yielding distorted images. Results on the accuracy of the method as a function of the equivalent period are discussed in Section 3.
For colored objects, partial absorption of the illuminating light occurs, and the contrast of the signals for FP and DIC is expected to decrease. This may imply a reduction of the accuracy of the proposed method. An analysis of this issue would have to take into account the spectral characteristics of the projector, the coupling effects between neighboring channels and the color characteristics of the object [16]. An example of a colored object is shown in Fig. 4. The mean color of the object when illuminated by white light is [190,93,110]; see Fig. 4(h). For such objects, selection of the illuminating light should take into account the color content of the object. For the current example we use illuminating light composed of green fringes [0,255,60] embedded in a magenta background [127,0,255]. As noticed from Figs. 4(e) and (f), the separation of the signals is still possible. However, for some particular colors, such as pure red hues, the method is not adequate.
2.4 Influence of residual speckle on the accuracy of FP
When particles (white-light speckles) are directly painted on the object, as in [3], they ultimately appear in the fringe image as multiplicative noise. This effect may be represented as a random variation of the object reflectivity $r(x,y)$. Thus, the intensity recorded by the camera for a certain object state can be expressed as [5]

$$I(x,y) = r(x,y)\left\{ a(x,y) + b(x,y)\cos[2\pi f_0 x + \phi(x,y)] \right\}. \quad (9)$$
In our case, the representation of fringe images with residual white-light speckles follows that given by Eq. (9) as well, but the contrast of the speckle is relatively low. When applying the Fourier method to Eq. (9), the expression for the band-pass filtered side lobe in the Fourier domain, Eq. (6), should be modified to consider high-frequency components of the central lobe that may leak into the side lobe. In this case, recovery of the phase term is not guaranteed. However, if Eq. (6) is used with no modification, a reduction of the accuracy of the FP method results. An additional source of error for the Fourier method arises from the digitization of the high-frequency signals that the speckle field represents. This problem has been pointed out previously in [17].
To show the influence of the white-light speckle on the FP accuracy, we carry out numerical simulations based on Eq. (9). The object reflectivity $r(x,y)$ is expressed as a summation of Gaussian functions whose centers are randomly selected and which have constant radii. By varying the latter parameter and the number of Gaussian functions, the level of disturbance caused by the residual speckle field can be adjusted. Furthermore, adjustment of the speckle contrast can be achieved by considering a normalized version of the reflectivity, $r(x,y) = r_0 + c\, N\{r(x,y)\}$, where $N$ denotes a normalization operator and $r_0$ and $c$ the background and the contrast of the speckle field, respectively. Thus, the resulting effect caused by the residual speckles can be varied through any of these parameters or by any combination of them.
The appearance of the simulated fringe images can be altered as well by changing the modulation term $b(x,y)$; for relatively small fringe modulation, the influence of the speckle field is enhanced. In Fig. 5 we show three simulated displaced images; numerical simulations are done in Matlab Version 7. The reference image is not shown, but it is also obtained from Eq. (9). Figure 5(a) corresponds to a speckle-free image, while Figs. 5(b) and 5(c) correspond to images with speckle content, the disturbance being stronger in Fig. 5(c) than in Fig. 5(b). For these two last cases, the modulation $b$ is measured directly from the experimental images shown in Figs. 5(d) and 5(e), respectively. Calculation of $b$ is done by using Eq. (6) as

$$b(x,y) = 2\,|c_1(x,y)|. \quad (10)$$
The relative errors for the synthetic images, Figs. 5(a-c), after applying the Fourier method, are 0.1, 1.6 and 3.6%, respectively. As expected, the accuracy of the FP method varies inversely with the contrast of the speckle field and directly with the modulation of the fringe pattern. Recently, a similar result has been found for fringe projection using laser interference [18].
As described in the next section, experimental cases resembling the images in Figs. 5(b) and (c) occur when small projected periods are used, i.e., Figs. 5(d) and (e), respectively. In that case, the object texture significantly affects the contrast of the projected fringes. In both Figs. 5(d) and (e) a projected period of 0.78 mm is used, but the roughness in Fig. 5(e) is larger than in Fig. 5(d). The calculated relative errors for these two cases are 2.6 and 7.0%, respectively. The complete set of experimental error measurements is given in the following section, where the roughness of the object is kept unchanged and corresponds to the case illustrated in Fig. 5(d).
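The trend reported above can be reproduced qualitatively with a toy simulation. Here the reflectivity is a uniform random field rather than the summation of Gaussian bumps described earlier, and all parameter values are our own choices; the phase error nevertheless grows with the speckle contrast.

```python
import numpy as np

def recover_dphi(i1, i2, f0, hw=0.06):
    # Band-pass around +f0 along x, then the phase difference of Eq. (8)
    fx = np.fft.fftfreq(i1.shape[1])
    w = (np.abs(fx - f0) < hw)[None, :]
    c1 = np.fft.ifft2(np.fft.fft2(i1) * w)
    c2 = np.fft.ifft2(np.fft.fft2(i2) * w)
    return np.angle(c2 * np.conj(c1))             # equivalent to Eq. (8)

rng = np.random.default_rng(1)
ny, nx = 256, 256
x = np.arange(nx)[None, :] + 0.0 * np.arange(ny)[:, None]
f0, a, b = 0.125, 0.5, 0.2
dphi = 1.5 * np.sin(2 * np.pi * np.arange(ny) / ny)[:, None] + 0.0 * x

noise = rng.random((ny, nx))                      # stand-in speckle field
errors = []
for c in (0.0, 0.2, 0.5):                         # speckle contrast
    r = (1.0 - c) + c * noise                     # reflectivity, as in Eq. (9)
    i1 = r * (a + b * np.cos(2 * np.pi * f0 * x))
    i2 = r * (a + b * np.cos(2 * np.pi * f0 * x + dphi))
    est = recover_dphi(i1, i2, f0)
    errors.append(np.sqrt(np.mean((est - dphi)[16:-16, 16:-16] ** 2)))
```

The dominant error source is leakage of the broadband central lobe into the band-pass window, which is exactly the mechanism discussed above; the multiplicative part common to both images largely cancels in the phase difference.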
3. Experimental results
For the experimental results, the arrangement employed for DIC and FP is illustrated in Fig. 6. It consists of a high-definition three-LCD Panasonic projector (PT-AE2000U, 1500 lm) and an Olympus camera (Camedia C8080WZ, 2/3” Bayer-mosaic CCD sensor, 8 Mpix), which renders images of 3264x2448 pix. The imaging system is set to an f-number of 3.5 and an exposure time of 1/1000 s. The distances from the object to the projector and from the camera to the projector are selected so as to minimize projection effects [14]. Also, the projection angle is set to 22° and the projected period is selected to take the values 0.78, 1.00, 2.00, 3.00 and 4.00 mm (corresponding to equivalent periods of 1.93, 2.48, 4.95, 7.43 and 9.90 mm). The object is an aluminum plate of dimensions 300x300x6.35 mm whose surface is sprayed white by applying a powder developer (Ardrox 9D4A). The width of the region of observation is not constant: it is 164 mm for the projected period of 0.78 mm, 170 mm for the period of 1.00 mm, and 250 mm for the periods of 2.00, 3.00 and 4.00 mm. The three main components of the layout (CCD camera, object and projector) are mounted on tripods, as shown in Fig. 6(a). Care was taken in the alignment of the experimental setup, in particular to prevent the appearance of any significant in-plane displacement when the object is given an out-of-plane displacement, and similarly for given in-plane displacements.
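As a quick consistency check, the quoted equivalent periods follow from the listed projected periods and the 22° projection angle through the relation $p_e = p/\tan\theta$ of Sec. 2.2:

```python
import math

theta = math.radians(22.0)                     # projection angle
projected = [0.78, 1.00, 2.00, 3.00, 4.00]     # projected periods, mm
reported = [1.93, 2.48, 4.95, 7.43, 9.90]      # equivalent periods, mm
# p_e = p / tan(theta), rounded to the two decimals quoted in the text
equivalents = [round(p / math.tan(theta), 2) for p in projected]
```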
The experiment consists of translating the aluminum plate by known steps along the coordinate axes. The steps are given by a stepper-motor-driven translation stage from Thorlabs with a resolution of 1.25 μm per step. For each projected period, the aluminum plate is given the following displacements in mm (both out-of-plane and in-plane): 0.063, 0.125, 0.250, 0.500, 0.750, 1.000, 1.250, 1.500, 1.750 and 2.000, which in pixels are (for the 250-mm-wide region of observation) 0.8, 1.6, 3.3, 6.5, 9.8, 13.1, 16.3, 19.6, and 26.1. In Fig. 6(c) we show a typical image produced by the system when the illuminating light corresponds to black-and-white structured light.
In Fig. 7 we show results of the obtained accuracy of the method for the five different grating periods, for both out-of-plane and in-plane displacements. For each given displacement, three measurements were done, and the average relative error is calculated. In addition to this, results using standard methods for FP and DIC are included and are indicated by letters BW. These standard results were obtained by projecting black-and-white fringes for FP and uniform white light for DIC, and serve as reference values. Furthermore, they were implemented in a non-simultaneous way.
For the calculation of in-plane displacements, image-correlation software from IDT (proVISION-XS) is used, with a subimage size of 64x64 pix. It is worth pointing out that when calculating out-of-plane displacements, phase unwrapping is not required, as all the equivalent periods are greater than the displacements (the only exception is the displacement of 2.0 mm combined with the equivalent period of 1.93 mm). Besides, the duty cycle of the fringes was set to 75% in order to reduce the size of the region that is partially darkened by the cyan portion of the fringes, which may affect the speckle information.
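The commercial DIC step can be mimicked with a minimal FFT-based cross-correlation per subimage followed by a three-point parabolic peak fit, in the spirit of Sec. 2.1 (a sketch with synthetic data, not the proVISION-XS algorithm):

```python
import numpy as np

def dic_shift(ref, cur):
    """Shift of subimage `cur` relative to `ref` via FFT cross-correlation,
    refined to subpixel precision with a three-point parabolic fit."""
    R = np.fft.fft2(ref - ref.mean())
    C = np.fft.fft2(cur - cur.mean())
    corr = np.fft.ifft2(np.conj(R) * C).real      # peak at the shift
    iy, ix = np.unravel_index(np.argmax(corr), corr.shape)
    ny, nx = corr.shape

    def refine(cm, c0, cp):                       # parabolic peak offset
        den = cm - 2.0 * c0 + cp
        return 0.0 if den == 0.0 else 0.5 * (cm - cp) / den

    dx = refine(corr[iy, ix - 1], corr[iy, ix], corr[iy, (ix + 1) % nx])
    dy = refine(corr[iy - 1, ix], corr[iy, ix], corr[(iy + 1) % ny, ix])
    sx = ix if ix <= nx // 2 else ix - nx         # signed integer part
    sy = iy if iy <= ny // 2 else iy - ny
    return sy + dy, sx + dx

# Synthetic 64x64 subimage, band-limited so the correlation peak is smooth,
# shifted circularly by a known subpixel amount in the Fourier domain
rng = np.random.default_rng(2)
k = np.fft.fftfreq(64)
mask = (np.abs(k)[None, :] < 0.2) & (np.abs(k)[:, None] < 0.2)
ref = np.fft.ifft2(np.fft.fft2(rng.random((64, 64))) * mask).real
dy_true, dx_true = 3.30, -2.25
shift_phase = np.exp(-2j * np.pi * (k[:, None] * dy_true + k[None, :] * dx_true))
cur = np.fft.ifft2(np.fft.fft2(ref) * shift_phase).real
dy, dx = dic_shift(ref, cur)
```

The parabolic refinement plays the role of the Gaussian/paraboloidal fits mentioned in Sec. 2.1; its residual bias for smooth peaks is typically a small fraction of a pixel.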
From Fig. 7 it is observed that, for the present conditions, the overall performance of DIC is better than that of FP. This result was found in previous works as well [1, 3]. From Fig. 7(a), we see that the accuracy obtained for both small and large out-of-plane displacements is high in the case of small grating periods (the case of 1.00 mm). It would be expected that the smallest period, 0.78 mm, would give still better results, but the discrepancies between the given and the measured displacements rather increase. This is due to the fact that a notable reduction of the contrast of the fringes occurs as the camera lens approaches its resolving limit. Additionally, as the period increases, the error increases slightly on average. Notice also the reliability of the proposed method by comparing the relative errors obtained by the standard method (labeled BW) and by the proposed method. For the reference results, periods of 1.00 and 4.00 mm were used (1.00 BW and 4.00 BW in Fig. 7(a), respectively).
On the other hand, from Fig. 7(b) it is seen that the in-plane error is relatively large for small displacements. Furthermore, since the in-plane movement is uniform over the whole region of observation, the resulting rigid-body motion can be readily compensated, and the error is almost independent of the given displacement. However, for an arbitrary distribution of in-plane displacement, an average in-plane displacement cannot be used for compensation, and in general the error would increase with displacement [19]. Additionally, the in-plane error should not depend on the projected period if correct color encoding is carried out. This is confirmed by the similarity of all the curves in Fig. 7(b), which include the reference result (labeled BW). This similarity also implies that the presence of residual fringes in the speckle images does not affect the in-plane measurements obtained by the proposed method.
As results in Fig. 7 suggest, the average error for the present method is around 1% when projected periods of 1.0 mm are employed. When comparing this value with those obtained in [1] and [3], which are within 2 and 5%, we can say that the present method shows a good performance. This holds even for uniformly colored objects, such as the one shown in Fig. 4, where the resulting errors were similar to the ones presented in Fig. 7.
In the case of out-of-plane measurements, if the corresponding speckle images are processed by DIC, then we obtain a radial-like vector field caused by the angular field of view of the object, as shown in Fig. 8(a) . In this figure, for the case of an out-of-plane movement of 1.5 mm, a maximum in-plane displacement of 0.26 mm is found. This value agrees with that found by using the parameters of the setup (object-to-imaging lens distance of 885 mm and observation size of 250 mm). For three-dimensional displacements, the added nonuniform in-plane displacement arising from the out-of-plane displacement will increase the in-plane error. This, however, can be prevented by the use of a telecentric imaging lens [1] or may be numerically compensated.
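The quoted 0.26 mm can be checked with a small-displacement pinhole model, assuming a 4:3 field of view (250 mm x 187.5 mm, from the 3264x2448 sensor aspect ratio) and that the apparent in-plane shift of a point at radial distance r from the optical axis is r·w/L; both are assumptions of ours, not stated explicitly in the text.

```python
import math

L = 885.0                       # object-to-imaging-lens distance, mm
w = 1.5                         # out-of-plane displacement, mm
width = 250.0                   # observed field width, mm
height = width * 2448 / 3264    # assumed 4:3 field -> 187.5 mm
r_max = 0.5 * math.hypot(width, height)   # corner distance from the axis
shift = r_max * w / L           # apparent in-plane shift at the corner, mm
```

Under these assumptions the corner of the field moves by about 0.26 mm, consistent with the maximum in-plane displacement reported for Fig. 8(a).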
Similarly, for in-plane measurements, by processing the corresponding fringe images via correlation, we find that the average absolute residual displacement is close to zero; see Fig. 8(b). This implies that the speckle information does not appear on the image of fringes, which guarantees a considerable reduction of the noise level in out-of-plane displacement measurements. Hence, as commented above, the application of any directional low-pass filter to the images of fringes becomes unnecessary.
4. Conclusions
We presented a color-encoded method that permits the measurement of 3D deformation for opaque objects by a combination of fringe projection and digital image correlation. To achieve this, we used an illuminating image that consisted of a pattern of cyan stripes embedded in a white background. For white-sprayed objects the signal information for DIC was encoded on the blue channel of a recorded image and that for FP on the red channel. To show the feasibility of the technique, a series of measurements was conducted and the error was computed. We found that the overall performance of DIC was better than that for FP with the reported conditions. Furthermore, it was shown that as the content of the DIC signal (speckles) in the resulting images of fringes is quite low, a high level of accuracy of 3D displacement measurements could be achieved.
Having all the information required for 3D deformation calculation in one image allows us to carry out analyses of relatively fast events.
References and links
1. C. J. Tay, C. Quan, T. Wu, and Y. H. Huang, “Integrated method for 3-D rigid-body displacement measurement using fringe projection,” Opt. Eng. 43(5), 1152–1159 (2004). [CrossRef]
2. B. Barrientos, M. Cerca, J. Garcia-Marquez, and C. Hernandez-Bernal, “Three-dimensional displacement fields measured in a deforming granular-media surface by combined fringe projection and speckle photography,” J. Opt. A, Pure Appl. Opt. 10(10), 104027 (2008). [CrossRef]
3. P. Siegmann, V. Álvarez-Fernández, F. Díaz-Garrido, and E. A. Patterson, “A simultaneous in- and out-of-plane displacement measurement method,” Opt. Lett. 36(1), 10–12 (2011). [CrossRef] [PubMed]
4. Z. Zhang, C. E. Towers, and D. P. Towers, “Time efficient color fringe projection system for 3D shape and color using optimum 3-frequency Selection,” Opt. Express 14(14), 6444–6455 (2006). [CrossRef] [PubMed]
5. L. Fu, Z. Li, L. Yang, Q. Yang, and A. He, “New phase measurement profilometry by grating projection,” Opt. Eng. 45(7), 073601 (2006). [CrossRef]
6. H. G. Park, D. Dabiri, and M. Gharib, “Digital particle image velocimetry/thermometry and application to the wake of a heated circular cylinder,” Exp. Fluids 30(3), 327–338 (2001). [CrossRef]
7. C. Brücker, “3-D PIV via spatial correlation in a color-coded light-sheet,” Exp. Fluids 21 (4), 312–314 (1996). [CrossRef]
8. P. Synnergren and M. Sjodahl, “A stereoscopic digital speckle photography system for 3-D displacement field measurements,” Opt. Lasers Eng. 31(6), 425–443 (1999). [CrossRef]
9. A. K. Prasad, “Stereoscopic particle image velocimetry,” Exp. Fluids 29(2), 103–116 (2000). [CrossRef]
10. M. Raffel, C. Willert, and J. Kompenhans, Particle Image Velocimetry: A Practical Guide (Springer-Verlag, 1998).
11. D. J. Chen, F. P. Chiang, Y. S. Tan, and H. S. Don, “Digital speckle-displacement measurement using a complex spectrum method,” Appl. Opt. 32(11), 1839–1849 (1993). [CrossRef] [PubMed]
12. B. Barrientos, M. Cywiak, W. K. Lee, and P. Bryanston-Cross, “Measurement of dynamic deformation using a superimposed grating,” Rev. Mex. Fis. 50(1), 12–18 (2004).
13. M. Takeda, H. Ina, and S. Kobayashi, “Fourier-transform method of fringe-pattern analysis for computer-based topography and interferometry,” J. Opt. Soc. Am. 72(1), 156–160 (1982). [CrossRef]
14. K. J. Gasvik, Optical Metrology, 3rd ed. (John Wiley and Sons, 2003).
15. T. Kreis, “Digital holographic interference-phase measurement using the Fourier-transform method,” J. Opt. Soc. Am. A 3(6), 847–855 (1986). [CrossRef]
16. D. Caspi, N. Kiryati, and J. Shamir, “Range imaging with adaptive color structured light,” IEEE Trans. Pattern Anal. Mach. Intell. 20(5), 470–480 (1998). [CrossRef]
17. S. Zhang, “Recent progresses on real-time 3D shape measurement using digital fringe projection techniques,” Opt. Lasers Eng. 48(2), 149–158 (2010). [CrossRef]
18. S. Rosendahl, E. Hällstig, P. Gren, and M. Sjödahl, “Phase errors due to speckles in laser fringe projection,” Appl. Opt. 49(11), 2047–2053 (2010). [CrossRef] [PubMed]
19. J. Westerweel, “Fundamentals of digital particle image velocimetry,” Meas. Sci. Technol. 8(12), 1379–1392 (1997). [CrossRef]