## Abstract

Speckle patterns have high-frequency phase variations, which make it difficult to find the absolute phase of a single speckle pattern; however, the phase of the difference between two correlated speckle patterns can be determined. This is done by applying phase-shifting techniques to speckle interferometry, which quantitatively determine the phase of double-exposure speckle measurements. The technique uses computer control to take data and calculate phase without an intermediate recording step. The randomness of the speckle causes noisy data points, which are removed by data processing routines. One application of this technique is finding the phase of deformations, where up to ten waves of wavefront deformation can easily be measured. Results of deformations caused by tilt of a metal plate and a disbond in a honeycomb structure brazed to an aluminum plate are shown.

© 1985 Optical Society of America

## I. Introduction

Speckle interferometry techniques for double-exposure measurements, such as electronic speckle pattern interferometry (ESPI), which measure deformations and contours, have been in use for about 15 years.[1]–[6] These techniques produce correlation fringes which correspond to the object movement between exposures or to the object’s shape. To obtain fringes corresponding to an object deformation, primary interferograms of the object are recorded using a TV camera before and after the deformation. These interferograms are then electronically processed to produce correlation fringes, or a secondary interferogram, which corresponds exactly to the object movement between exposures. However, the correlation fringes are not clearly defined because they are composed of variations in speckle contrast [for example, see Fig. 5(a)]. Speckle interferometry techniques have been good for qualitative measurements but have not produced good quantitative results, since it is hard to determine the fringe centers.

Quantitative data can be obtained using phase-shifting interferometry (PSI).[7] PSI has been used to nondestructively test specular objects by shifting the phase of one beam in the interferometer with respect to the other. The phase of the test wavefront relative to the reference wavefront can be calculated simply from the interferogram intensities measured for multiple phase shifts. Double-exposure phase measurements (i.e., of deformations) have been made by combining double-exposure holography and phase-shifting interferometry in digital holographic interferometry (DHI).[8],[9] DHI requires the making of an intermediate hologram (using something such as a thermoplastic camera) with the test object in place. The object is then deformed, the phase of one beam in the interferometer is changed, and the interference between the wavefront recorded in the hologram and the deformed wavefront produces secondary interference fringes corresponding to the deformation. The intensities of these secondary fringes are recorded at different relative phase shifts so that the phase can be calculated. DHI has been used to quantitatively test both optically smooth and diffuse surfaces.

Phase-shifting can be applied to speckle interferometry to produce phase maps for double exposures of diffuse surfaces without an intermediate recording step. The phase maps are calculated from sets of phase-shifted intensity data taken before and after the deformation and then combined to produce the phase of the difference between exposures. Each set of phase-shifted intensity data is obtained in the same way as PSI. To find the phase of the object deformation, the phases of the speckle patterns before and after deformation are simply subtracted. This paper presents the theory for this technique along with experimental results and a discussion of its limitations.

## II. Technique and Limitations

Figure 1 shows a schematic of the speckle interferometer used in this work. This digital speckle pattern interferometer (DSPI) is a variation on electronic speckle pattern interferometers which processes speckle data inside a computer rather than in hardware.[10] To get the information needed to obtain fringes or phase information, the detector array must resolve the speckle. In this case, the aperture limits the bundle of rays incident on the image plane to *f*/40 to produce 60-*μ*m speckles. A spherical reference wave is produced by a single-mode optical fiber mounted in the center of the aperture, and the object and reference beams are collinear at the detector array. A diffuse object is illuminated by a nearly collimated beam to conserve light and is imaged onto the face of the detector array. The detector array is a 100 × 100-element Reticon diode array, which is electronically interfaced to an HP-9836C desk-top computer. To measure a deformation using ESPI or DSPI, a speckle pattern exposure is taken of the object in one position. Then the object is deformed, and another exposure is taken. These two speckle patterns are subtracted, and their difference is squared to obtain speckle correlation fringes corresponding to the object’s deformation.
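The subtract-and-square step can be illustrated with synthetic data. The following numpy sketch is illustrative only: the speckle statistics are simplified (uniformly distributed intensities and phases), the array size matches the 100 × 100 detector, and all names are mine rather than the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic speckle fields (illustrative statistics, not real speckle)
shape = (100, 100)                        # matches the 100 x 100 diode array
i_ref = rng.uniform(0.2, 1.0, shape)      # reference-beam intensity I1
i_obj = rng.uniform(0.2, 1.0, shape)      # object-beam intensity I2
phi = rng.uniform(0.0, 2 * np.pi, shape)  # random speckle phase

# Deformation modeled as a smooth phase ramp (tilt) across the field
x = np.linspace(0.0, 4 * np.pi, shape[1])
dphi = np.tile(x, (shape[0], 1))

def speckle_intensity(phase):
    """Two-beam intensity: I1 + I2 + 2*sqrt(I1*I2)*cos(phase)."""
    return i_ref + i_obj + 2 * np.sqrt(i_ref * i_obj) * np.cos(phase)

before = speckle_intensity(phi)
after = speckle_intensity(phi + dphi)

# Subtract and square: dark correlation fringes where dphi = 0 mod 2*pi
fringes = (before - after) ** 2
```

The first column of `fringes` is zero because the phase change there is zero, so the two exposures correlate perfectly; columns where the phase change is an odd multiple of *π* show the strongest fluctuations.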

This process can be shown mathematically. The intensity in a speckle interferometer is written as

*I*(*x*,*y*) = *I*_{1}(*x*,*y*) + *I*_{2}(*x*,*y*) + 2[*I*_{1}(*x*,*y*)*I*_{2}(*x*,*y*)]^{1/2} cos*ϕ*(*x*,*y*), (1)

where *I*_{1} and *I*_{2} are the intensities of the reference and object beams, and *ϕ*(*x*,*y*) is the phase of the speckle pattern. The first two terms are known as self-interference terms because they relate to the interference of each beam in the interferometer with itself, and the third term is the cross-interference between the object and reference beams. The modulation, contrast, or visibility of the speckle pattern is given by the modulus of the cross-interference term. The modulation is an important factor for determining if data points are good or bad when using phase-shifting techniques. After the object has been deformed, the intensity becomes

*I*′(*x*,*y*) = *I*_{1} + *I*_{2} + 2[*I*_{1}*I*_{2}]^{1/2} cos[*ϕ*(*x*,*y*) + Δ*ϕ*(*x*,*y*)], (2)

where Δ*ϕ* is the phase change due to the object deformation. Secondary fringes are calculated from these two intensities by subtracting and taking the modulus squared. When an ensemble average is taken over many realizations of these secondary fringes, the secondary fringe function becomes

⟨[*I*(*x*,*y*) − *I*′(*x*,*y*)]²⟩ = 4⟨*I*_{1}*I*_{2}⟩[1 − cosΔ*ϕ*(*x*,*y*)]. (3)

This fringe function is similar to fringes in a classical interferometer where the fringes are sinusoidally dependent on a relative phase difference; however, Eq. (3) depends on the phase difference between exposures rather than the phase difference between object and reference beams.

Phase-shifting interferometry (PSI) is a technique which can determine the shape of a surface or wavefront by calculating a phase map from the measured intensities. The technique used here was introduced by Carré to calculate the phase modulo 2*π*.[11] For each data frame, the intensity is integrated over the time it takes to move a reference mirror linearly pushed by a piezoelectric transducer (PZT) through a 2*α* (usually 90°) change in phase. Four frames of intensity data are recorded in this manner:

*A* = *I*_{0}{1 + *γ* cos[*ψ*(*x*,*y*) − 3*α*]},

*B* = *I*_{0}{1 + *γ* cos[*ψ*(*x*,*y*) − *α*]},

*C* = *I*_{0}{1 + *γ* cos[*ψ*(*x*,*y*) + *α*]},

*D* = *I*_{0}{1 + *γ* cos[*ψ*(*x*,*y*) + 3*α*]}, (4)

where *I*_{0} is the average intensity, and *γ* is the modulation of the interference term. The phase calculated at each detected point (*x*,*y*) in the interferogram, written in terms of the four frames *A*–*D* in the order recorded, is

*ψ* = tan^{−1}({[3(*B* − *C*) − (*A* − *D*)][(*B* − *C*) + (*A* − *D*)]}^{1/2}/[(*B* + *C*) − (*A* + *D*)]), (5)

where the phase falls between 0 and *π*/2 (quadrant 1). The signs of the quantities

(*B* − *C*) + (*A* − *D*), (6a)

(*B* + *C*) − (*A* + *D*), (6b)

determine in which quadrant to place the phase, because they are proportional to sin*ψ* and cos*ψ*. When Eq. (6a) is positive and Eq. (6b) is negative, *ψ* is subtracted from *π* to place it in quadrant 2. If both Eqs. (6a) and (6b) are negative, *π* is added to *ψ* to put the phase in quadrant 3. Finally, if Eq. (6a) is negative and Eq. (6b) is positive, *ψ* is subtracted from 2*π* to place it in quadrant 4. If the denominator of Eq. (5) equals zero, *ψ* equals either *π*/2 or 3*π*/2 depending on the sign of Eq. (6a), and when the numerator equals zero, *ψ* equals either 0 or *π* depending on the sign of Eq. (6b). The 2*π* ambiguities are removed by adding or subtracting 2*π* from individual pixels until the phase difference between adjacent pixels is <*π*. Equation (5) calculates the phase independent of the phase shift 2*α* between frames of intensity data, eliminating the need for PZT calibration. Since the phase shift may vary from point to point in the image, owing to tilt of the mirror as it is translated by the PZT, this method can calculate the phase more accurately than methods which depend on how much the phase is shifted. The optical path difference (OPD) for the object wavefront relative to the reference wavefront can be determined from the phase:

OPD(*x*,*y*) = *ψ*(*x*,*y*)*λ*/2*π*. (7)
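The Carré arctangent and the quadrant logic above can be collapsed into a single two-argument arctangent, since the two sign quantities are proportional to sin*ψ* and cos*ψ* with the same constant of proportionality. The following numpy sketch is my own illustration of the calculation, not the author's code; the frame names `a`–`d` follow the order of recording.

```python
import numpy as np

def carre_phase(a, b, c, d):
    """Carre phase from four frames with equal (uncalibrated) steps 2*alpha.

    Frames: a,b,c,d = I0*[1 + gamma*cos(psi - 3*alpha, -alpha, +alpha, +3*alpha)].
    Returns psi modulo 2*pi; arctan2 handles quadrant placement.
    """
    prod = (3 * (b - c) - (a - d)) * ((b - c) + (a - d))
    mag = np.sqrt(np.clip(prod, 0, None))   # proportional to |sin psi|
    den = (b + c) - (a + d)                 # proportional to cos psi
    sin_sign = (b - c) + (a - d)            # proportional to sin psi
    psi = np.arctan2(np.where(sin_sign >= 0, mag, -mag), den)
    return np.mod(psi, 2 * np.pi)

# Check against synthetic frames with a known phase (illustrative values)
i0, gamma, alpha = 1.0, 0.6, np.pi / 4
psi_true = 2.5
a, b, c, d = (i0 * (1 + gamma * np.cos(psi_true + k * alpha)) for k in (-3, -1, 1, 3))
psi_est = carre_phase(a, b, c, d)
```

Because the numerator and denominator carry the same factor 8*I*₀*γ* sin²*α* cos*α* for any *α*, `arctan2` recovers *ψ* without knowing the actual step size, which is the calibration-free property noted above.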

The OPD is related to surface heights by a multiplicative factor which accounts for angles of illumination and viewing that may differ from the surface normal. This factor is one-half for a double-pass interferometer like a Twyman-Green. For phase-shifting techniques to work, the phase difference between adjacent pixels in the measured wavefront must be <*π* (one-half wave of OPD). This restriction limits the total number of waves of departure measurable across the test surface and thus limits the test sensitivity.

Speckle patterns have high frequency phase data which cause the phase difference between adjacent pixels to be >*π*. Thus the phase of a single speckle pattern can only be determined modulo 2*π*. However, the phase of the difference between two speckle patterns can be completely determined as long as the absolute phase change between exposures does not vary by more than *π* between adjacent detector elements. To find the phase of a deformation using speckle, four frames of intensity data are taken while shifting the phase of one beam in the interferometer. The object is then deformed, and four more frames of intensity data are taken while shifting the phase the same amount as for the first set of data. Once these data are taken, the phase of the deformation can be calculated from

Δ*ϕ*(*x*,*y*) = *ψ*′(*x*,*y*) − *ψ*(*x*,*y*), (8)

where *ψ* is given by Eq. (5). This calculation yields phases which are modulo 2*π*. The 2*π* ambiguities can be removed as described for PSI.
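The subtraction just described can be sketched in a few lines of numpy; the function name and the modulo-2*π* wrapping convention are my own choices for illustration.

```python
import numpy as np

def deformation_phase(psi_before, psi_after):
    """Phase of the deformation: subtract the two calculated speckle phases
    and wrap the result into [0, 2*pi). Sketch only; the 2*pi ambiguities
    are removed afterward as in PSI."""
    return np.mod(psi_after - psi_before, 2 * np.pi)

# Example: a phase decrease of 5 rad wraps to 2*pi - 5
wrapped = deformation_phase(6.0, 1.0)
```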

When phase-shifting techniques are applied to speckle interferometry, the randomness of the speckle will cause many noisy data points. Other than points where the intensity saturates the detector element, there are two main contributors to these noisy points which are fundamental limitations to this technique. They are (1) decorrelation of the speckle between the exposures before and after deformation, and (2) low modulation of the measured intensity at a given pixel as the phase is shifted.

Speckle decorrelation in this case is caused by changes in the collected scattering angles from the object which contribute to a single image point. Each point on the object imaged onto the detector array scatters light into a large solid angle. The *f*/40 optical system will only collect a small portion of this light. As the object is deformed, points on it will tilt, and different scattering contributions will be collected by the optical system than before the deformation. These tilted points will have a displaced collecting cone compared to the undeformed points (see Fig. 2). Thus the intensity and phase of the imaged speckle will change slightly. For the system shown in Fig. 1, five waves of surface tilt will displace the collecting angle by 3%. A statistical analysis of the speckle decorrelation for this amount of tilt shows a 33° rms phase error.[16] One way to reduce this problem is by using a larger aperture, but this produces smaller speckles, so that a detector array with smaller pixels would be needed. Simply changing the geometry of the system would not solve this problem. Because the phases may be slightly off, this will cause problems in removing 2*π* ambiguities.

Low modulation is a fundamental problem in all phase-shifting techniques.[12] Because speckle is sampled with a finite-sized detector element, the intensity measured for that detector element is going to be an average over its area. To resolve the speckle in the image plane, the *f*/No. of the system is set to make the speckle size equal to the pixel size. With this speckle size, two speckles will influence one pixel on the average.[13] For only one speckle to influence each detector, the detector size would have to be about a tenth of the speckle size. If the aperture were made this small, very little light would get through. When the phase of one beam in the interferometer is shifted, the intensity of the speckle will change; but, with a detector element the size of the speckle, the intensity will not modulate as much as it would for point sampling. This is the effect of averaging over a pixel with more than one speckle influencing it.

Both problems could be corrected by using point detector elements. A point detector would enable a larger *f*/No. to be used, so that the speckle-decorrelation problem could be reduced, and it would increase the modulation by reducing the averaging effects. The spacing between detectors is not critical, because it is assumed that a speckle is incident on the same detector both before and after deformation. The only constraint on spacing is that from one point detector to the next, the change in phase due to deformation cannot be larger than *π*. This is a consequence of the sampling theorem, and it influences the overall sensitivity (how many waves of deformation can be measured) of the test.

Data points which are saturated or have low modulation can be removed from the data sets. Points where the phase is slightly off due to a decorrelation between speckles will be modulated sufficiently and not removed; but they still cause problems and will be discussed in more detail later. To remove most noisy data points, the intensity measured in the interferometer [Eq. (1)] is tested to determine if the data points are good. There are three ways a pixel can be flagged as bad: (1) the data point is saturated; (2) its modulation as the phase is shifted is not greater than some *I*_{min}; and (3) the pixel is bad in either data set. The modulation of the speckle is determined by calculating the magnitude of the cross-interference term and checking it against an intensity minimum:

*γI*_{0} = 2[*I*_{1}*I*_{2}]^{1/2} ≥ *I*_{min}. (9)

This quantity can be calculated from the four intensity frames given in Eq. (4) by

*γI*_{0} = {[(*B* − *C*) + (*A* − *D*)]² + [(*B* + *C*) − (*A* + *D*)]²}^{1/2}/(2√2), (10)

where this equation assumes that 2*α* is ~90°. It should be noted that *I*_{min} can be related to the amount of electronic noise present if point detectors are used.[14]
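The modulation calculation and the three bad-pixel tests can be sketched as follows, again assuming a phase step near 90°. The threshold arguments and function names are illustrative assumptions, not values or code from the paper.

```python
import numpy as np

def modulation(a, b, c, d):
    """Estimate the cross-interference magnitude gamma*I0 from the four
    recorded frames, assuming the phase step 2*alpha is about 90 deg."""
    s = (b - c) + (a - d)    # proportional to sin psi
    co = (b + c) - (a + d)   # proportional to cos psi
    return np.sqrt(s ** 2 + co ** 2) / (2 * np.sqrt(2))

def good_pixels(frames_before, frames_after, i_min, i_sat):
    """A pixel is good only if it is unsaturated in every frame and its
    modulation exceeds i_min in BOTH data sets (sketch of the tests)."""
    ok = np.ones_like(frames_before[0], dtype=bool)
    for frames in (frames_before, frames_after):
        ok &= modulation(*frames) > i_min    # low-modulation test
        for frame in frames:
            ok &= frame < i_sat              # saturation test
    return ok
```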

The integration routine first passes along the rows and then the columns. It adds or subtracts 2*π* to a single pixel until the difference between adjacent pixels is <*π*. If there are bad pixels (those that have been thrown out), they are skipped, and the phase difference is checked between the nearest good pixels. These missing bad points sometimes cause problems, because the phase may change considerably between the good pixels. Data points which are slightly off in phase will not be flagged by the bad pixel tests. The result of having points that slightly err in phase is that from one point to the next the phase may jump by 2*π* and then jump back. The integration routine can handle these problems; but, if the phase jumps by 2*π* and then returns only partially at the next point, the integration will add 2*π* and then not subtract it again afterward as it should. This problem increases as the amount of phase deformation is increased. To reduce the effect of these points, some data processing must be done. The integration problems encountered for slight phase errors are not present when this technique is used with a Twyman-Green interferometer to test specular surfaces.
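A simplified version of the row pass of such an integration routine might look like the following numpy sketch; the column pass, which the paper's routine also performs, would repeat the same logic down each column. This is my own illustration, not the author's program.

```python
import numpy as np

def unwrap_rows(phase, good):
    """Remove 2*pi ambiguities along each row, skipping flagged-bad pixels.

    Each good pixel is compared with the nearest good pixel to its left,
    and the multiple of 2*pi that brings their difference below pi is
    subtracted (sketch of the row pass only)."""
    out = phase.copy()
    for row, ok in zip(out, good):
        last = None                      # value of nearest good pixel so far
        for j in range(row.size):
            if not ok[j]:
                continue                 # bad pixels are skipped entirely
            if last is not None:
                row[j] -= 2 * np.pi * np.round((row[j] - last) / (2 * np.pi))
            last = row[j]
    return out
```

As the text notes, this works only while the true phase change between neighboring good pixels stays below *π*; a bad pixel widens that gap and makes the condition harder to satisfy.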

To improve the results of integrating the phase, two processing routines have been implemented. The first is a median window,[15] which takes all the data points contained within the window, sorts them until the median is found, and then places the median value into a location in another array corresponding to the center of the window. The window is then moved over by one pixel, and the procedure is repeated until the entire array has been processed. Features which are smaller than (*N* + 1)/2 in extent are removed, where *N* is the 1-D size of the window. Features greater than 2*N* in extent are retained by the median window to preserve edges. This is important when processing raw phase data, because boundaries where the phase jumps by 2*π* need to be preserved.
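A minimal implementation of such a median window is sketched below. Edge handling (shrinking the window at the borders) is my assumption, since the paper does not specify it.

```python
import numpy as np

def median_window(data, n):
    """Slide an n x n window over the array and write the median of each
    neighborhood to the output; windows shrink at the edges (sketch)."""
    half = n // 2
    out = np.empty_like(data)
    rows, cols = data.shape
    for i in range(rows):
        for j in range(cols):
            r0, r1 = max(0, i - half), min(rows, i + half + 1)
            c0, c1 = max(0, j - half), min(cols, j + half + 1)
            out[i, j] = np.median(data[r0:r1, c0:c1])
    return out
```

A single-pixel spike is removed while a straight step edge (such as a 2*π* phase boundary) passes through unchanged, which is the edge-preserving property the text relies on.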

The median window does a good job filling in bad data points but does not change the pixel values where the phase errs. To smooth data points where the phase is slightly off and preserve edges as well, another processing routine was developed. This routine compares each data point to eight nearby pixels in the form of a plus sign. If most of the nearby points are close in phase value to the center pixel, it is left alone. But, if most of them are different, the center pixel is replaced with the average of the good nearby pixels. By combining this routine with the median window, the effect of 2*π* jumps by the integration routine which do not return to the desired phase can be reduced or eliminated. Hence good phase contour maps may be generated.
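The neighbor-comparison routine might be sketched as follows. The tolerance parameter, the exact majority rule, and the choice of averaging the disagreeing neighbors are my assumptions; the paper states only that eight plus-sign neighbors are compared and outliers replaced by an average of good neighbors.

```python
import numpy as np

def smooth_outliers(data, tol):
    """Replace a pixel with the mean of its plus-sign neighbors when most
    of them differ from it by more than tol (sketch; the eight neighbors
    are two in each direction along the rows and columns)."""
    out = data.copy()
    rows, cols = data.shape
    offsets = [(-2, 0), (-1, 0), (1, 0), (2, 0),
               (0, -2), (0, -1), (0, 1), (0, 2)]
    for i in range(rows):
        for j in range(cols):
            vals = [data[i + di, j + dj] for di, dj in offsets
                    if 0 <= i + di < rows and 0 <= j + dj < cols]
            close = [v for v in vals if abs(v - data[i, j]) <= tol]
            if len(close) < len(vals) / 2:      # most neighbors disagree
                far = [v for v in vals if abs(v - data[i, j]) > tol]
                out[i, j] = np.mean(far)        # replace with their average
    return out
```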

## III. Experimental Results

A computer program was written to calculate phase from double-exposure measurements. This program controls the PZT, transfers data from the detector array, and calculates the phase at each data point as described earlier. It also utilizes the bad data point tests for removing pixels with low modulation and has the ability to smooth the data and apply median windows of different sizes.

The optical layout of the interferometer, which was described earlier, is shown in Fig. 1. A 10-mW He–Ne laser provides the source, while a variable beam splitter controls the ratio of intensities between the object and reference beams to optimize the cross-interference term. The PZT is linearly ramped by the computer while the detector array integrates the intensity for 20 msec/frame. The object is both illuminated and viewed at 45°. A 3- × 4-cm illuminated area on the object is imaged onto the detector array. A wedge is placed on the window over the detector array to reduce the effect of interference fringes between its front and rear surfaces. Because the single-mode optical fiber preserves polarization and has a preferred coupling axis, halfwave plates are placed in each beam to align the polarizations at the image plane. Finally, the difference in path length between the beams must be within the coherence length of the laser to ensure good interference.

To determine the precision of this technique, sets of data are taken without deforming the object. Thus the phases of two identical speckle patterns (except for air turbulence) are subtracted and then integrated. The resulting phase distribution has a peak-to-valley (P–V) of λ/20 and an rms of λ/250. When the data are processed using smoothing and a 3 × 3 median window before integration, the P–V is λ/40, and the rms is unchanged. This test accounts for electronic noise and low modulation but does not include speckle decorrelation.

Figure 3(A) shows the raw phase obtained from tilting a rough metal plate between the two sets of phase data. The modulation threshold *I*_{min} was set at 5, which removed 4% of the data pixels. These data have a salt-and-pepper appearance due to the randomness of the speckle. When these raw phase data are integrated [Fig. 3(B)] to remove 2*π* ambiguities, streaking is caused by points where the phase errs. These streaks can be removed along with filling in bad data points by using a 5 × 5 median window [Fig. 3(C)]. The final contour map of the deformation is in Fig. 3(D), and it has a P–V of 3.5 waves OPD.

When a deformation of less than 5 waves OPD is introduced, the raw phase data can be directly integrated and bad data points filled in without any problem; but, for larger deformations, the raw phase data must be smoothed and median windowed before integration. Figure 4(A) shows raw phase data for a tilt deformation of about eight waves. With *I*_{min} = 8, 9.6% of the data were flagged as bad. When integrated, large areas of the image are incorrectly integrated owing to phase errors and bad pixels [Fig. 4(B)]. To address this problem, the data are smoothed and followed by a 3 × 3 median window which fills in bad points [Fig. 4(C)]. After these smoothed data are integrated, a few streaks are present caused by the points which still err in phase. These residual streaks can be removed by a 5 × 5 median window. The contour map of this tilt [Fig. 4(D)] shows a P–V wavefront deformation of 7.8 waves tilt. The smoothing of the data enables larger deformations to be processed. The ultimate sensitivity limit is the speckle noise due to decorrelation and low modulation. For larger deformations, higher *I*_{min} values need to be used. This causes more data points to be flagged as bad and not used. Data points have to be smoothed and filled in to obtain good results.

A final example is a disbonded area in a honeycomb structure brazed to an aluminum plate. Speckle correlation fringes using DSPI are shown in Fig. 5(A). The object was deformed between exposures by heating the aluminum plate, where the number of fringes seen relates to how much the disbonded area was heated. Figure 5(B) shows the raw calculated phase with *I*_{min} = 2. After processing these data in the same manner as the last example, the resulting contour map of the disbonded area has a P–V of 2.25 waves OPD [Fig. 5(C)]. This technique enables the location, size, and shape of the defect to be determined quantitatively.

## IV. Conclusions

The phase contour of an object deformation can be determined quantitatively by applying phase-shifting techniques to speckle interferometry. This allows the location, size, and shape of a defect to be measured. Because the phase is calculated directly, no intermediate recording step is needed, and there is no need to process speckle data to form speckle correlation fringes and locate fringe centers. The ultimate limitations to this technique are pixels with low intensity modulation as the phase is changed, and pixels having decorrelation of the speckles between data recorded before and after deformation. These limitations cause phase errors but can be smoothed or filled in by processing the data to enable measurement of larger deformations. Over a 100-element wide detector array, ten waves of wavefront deformation are measurable to within λ/10. The measurement range could be extended with the use of point detectors, because they would reduce the effects of speckle noise and low modulation.

This same algorithm can be applied to both specular and diffuse surfaces by changing the optical setup. When this technique is used with a Twyman-Green interferometer and a specular object, the speckle decorrelation effects are not seen; however, there is still low modulation of some data points. Another application of this technique utilizing speckle is contouring using either two wavelengths or two angles of illumination.

The author wishes to thank Gudmunn Slettemoen and Jim Wyant for their helpful discussions.

## Figures

Fig. 5. (A) DSPI fringes of disbond in braze between a honeycomb structure and aluminum plate where the object was heated to deform it; (B) raw calculated phase of same structure; (C) contour map after smoothing, median window, and integration of raw phase.

## References

**1. **J. N. Butters and J. A. Leendertz, “Speckle Pattern and Holographic Techniques in Engineering Metrology,” Opt. Laser Technol. **3**, 26 (1971). [CrossRef]

**2. **R. Jones and C. Wykes, “General Parameters for the Design and Optimization of Electronic Speckle Pattern Interferometers,” Opt. Acta **28**, 949 (1981). [CrossRef]

**3. **C. Wykes, “Use of Electronic Speckle Pattern Interferometry (ESPI) in the Measurement of Static and Dynamic Surface Displacements,” Opt. Eng. **21**, 400 (1982). [CrossRef]

**4. **R. Jones and C. Wykes, *Holographic and Speckle Interferometry* (Cambridge U. P., London, 1983).

**5. **G. Å. Slettemoen, “General Analysis of Fringe Contrast in Electronic Speckle Pattern Interferometry,” Opt. Acta **26**, 313 (1979). [CrossRef]

**6. **O. J. Løkberg, “Advances and Applications of Electronic Speckle Pattern Interferometry (ESPI),” Proc. Soc. Photo-Opt. Instrum. Eng. **215**, 92 (1980).

**7. **J. C. Wyant *et al.*, “An Optical Profilometer for Surface Characterization of Magnetic Media,” ASLE Trans. **27**, 101 (1984). [CrossRef]

**8. **P. Hariharan, *Optical Holography* (Cambridge U. P., London, 1984).

**9. **P. Hariharan, B. F. Oreb, and N. Brown, “A Digital Phase-Measurement System for Real-Time Holographic Interferometry,” Opt. Commun. **41**, 393 (1982). [CrossRef]

**10. **K. Creath, “Digital Speckle Pattern Interferometry (DSPI) Using a 100 × 100 Imaging Array,” Proc. Soc. Photo-Opt. Instrum. Eng. **501**, 292 (1984).

**11. **P. Carré, “Installation et utilisation du comparateur photoélectrique et interférentiel du Bureau International des Poids et Mesures,” Metrologia **2**, 13 (1966). [CrossRef]

**12. **K. Creath, Y.-Y. Cheng, and J. C. Wyant, “Contouring Aspheric Surfaces Using Two-Wavelength Phase-Shifting Interferometry,” Opt. Acta **32** (1985), to be published. [CrossRef]

**13. **J. W. Goodman, “Statistical Properties of Laser Speckle Patterns,” in *Laser Speckle and Related Phenomena*, J. C. Dainty, Ed. (Springer-Verlag, Berlin, 1975). [CrossRef]

**14. **G. Å. Slettemoen and J. C. Wyant, “Maximal Fraction of Accepted Measurements in Phase-Shifting Speckle Interferometry: a Theoretical Study,” J. Opt. Soc. Am. A **2** (1985), to be published.

**15. **B. R. Frieden, “Some Statistical Properties of the Median Window,” Proc. Soc. Photo-Opt. Instrum. Eng. **373**, 219 (1981).

**16. **K. Creath, “Phase-Shifting Speckle Interferometry,” Proc. Soc. Photo-Opt. Instrum. Eng. **556** (1985), in press.