The velocity measurement limit in dynamic interferometry is vNyq, the velocity at which the interferogram is sampled at the Nyquist limit. We show that vNyq can be exceeded by assuming continuity of the surface motion and unwrapping the velocity modulo 2vNyq. The technique was demonstrated in a high-speed speckle pattern interferometer with spatial phase stepping. Surface velocities of 4vNyq were measured experimentally. With a reduced exposure, high-speed sub-Nyquist interferometry could be implemented up to a maximum acceleration of vNyq/ts, where ts is the detector frame period.
© 2011 OSA
Full-field, transient deformation measurements with interferometry have become routine with the availability of inexpensive high-speed detectors. The maximum velocity that can be measured and the number of measurement points are limited by the data transfer rate of the detector: as the velocity increases, an increased frame rate is required to sample the interferogram, thus reducing the number of measurement points at a given data transfer rate. In this research, spatial phase stepping was introduced to a high-speed interferometer in a way to increase both the velocity measurement range and the effective data transfer rate. By assuming that the velocity is continuous, a further increase in both quantities was achieved. These methods were implemented for vibration measurement with speckle interferometry, but are applicable to other types of transient deformation measured with other optical systems.
Vibration amplitude and relative vibration phase have been measured simultaneously at a number of separated points with high-speed speckle interferometry [1]. The system used a line-scan camera (256 pixels) operating at 100,000 frames per second (fps) and temporal phase stepping. It operated as a multipoint vibrometer that measured velocities up to 3.2 mm/s [2]. The measurement of relative vibration phase at separated points is not possible with a conventional single-point laser vibrometer, due to the low-bandwidth scanning of the measurement point; nor is it possible with standard full-field techniques such as time-averaged speckle interferometry, due to the low bandwidth of the detector. Speckle interferometry with temporal phase stepping has also been used to measure continuous surface deformations at lower speeds but at more measurement points. A maximum velocity of ~25 µm/s was measured over 239 × 192 pixels at 1,000 fps [3], and ~1 µm/s was measured over 512 × 512 pixels at 40 fps [4]. The data transfer rate from the camera was similar in each case, being 25.6, 45.9 and 10.5 Mpixels/s in [1], [3] and [4] respectively. The phase evolution with time can also be determined at each pixel independently by Fourier or wavelet analysis [5,6].
Spatial phase stepping, which separates phase-stepped interferograms to one or multiple detectors, eliminates the sequential acquisition of phase-stepped frames and potentially enables the velocity limit to be increased to vNyq. Separating images to multiple detectors is not practical for most high-speed applications, where the additional hardware cost can quickly become prohibitive. Separating the images to distinct regions of a single detector is an alternative, although the resolution of each image is then reduced. If three (or more) phase-stepped channels are recorded on a single detector, the increase in measurement range to vNyq will be offset by the reduced number of pixels at which the measurement is made. In other words, the same increase in maximum velocity could be achieved at fewer pixels by increasing the camera frame rate by three (or more) times in a temporal phase stepped system, and so the effective data transfer rate from the camera has not been improved.
In this paper, two spatial phase stepped channels were recorded on a single high-speed detector through the use of binary phase gratings. The maximum velocity range was increased to vNyq and the effective data transfer rate from the camera was increased. It is shown in the following section that by assuming that the velocity is continuous, measurements beyond vNyq can be made, further increasing the effective data transfer rate.
2. High-speed sub-Nyquist interferometry
For an interferometer using continuous-wave illumination, the interference intensity recorded by a detector at each pixel and at time t, integrated over the frame exposure te, is given by Eq. (1). The normalized velocity of Eq. (5) gives an immediate indication of the interferometer performance without reference to the particular laser wavelength and camera frame rate.
For spatial phase stepping, the intensity at corresponding pixels recorded on N separate regions of a single detector (or N separate detectors) at time t is given by Eq. (6) [8,9].
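To illustrate how two π/2-stepped channels can yield the interference phase, the following sketch recovers the wrapped phase by quadrature demodulation. It is illustrative Python only, not the paper's Eq. (7): it assumes the background (DC) intensity has already been removed from each channel, for example by temporal filtering.

```python
import numpy as np

def phase_from_quadrature(i1, i2):
    """Wrapped phase from two pi/2-stepped channels.

    Illustrative only: assumes the background (DC) intensity has been
    removed, so i1 = B*cos(phi) and i2 = B*cos(phi + pi/2) = -B*sin(phi).
    The paper's Eq. (7) is not reproduced here.
    """
    return np.arctan2(-np.asarray(i2), np.asarray(i1))

# Synthetic check: a known phase ramp is recovered (modulo 2*pi).
phi_true = np.linspace(0.0, 4.0 * np.pi, 200)
B = 0.8
i1 = B * np.cos(phi_true)
i2 = -B * np.sin(phi_true)
assert np.allclose(np.unwrap(phase_from_quadrature(i1, i2)), phi_true, atol=1e-6)
```

With only two channels, the background term must be suppressed separately, which is one reason a dedicated two-channel algorithm such as the paper's Eq. (7) is needed in practice.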
Greivenkamp introduced sub-Nyquist interferometry to measure large aspheric profiles in which the magnitude of the phase difference between adjacent pixels exceeded π radians [10]. Provided that the phase was measured accurately, the Nyquist limit was due to the reconstruction (spatial unwrapping) algorithm. A large increase in the measurement range of the interferometer was achieved by assuming that the spatial derivative of the aspheric surface shape (i.e. its slope) was continuous. The phase gradient was used to interpolate between pixels during spatial unwrapping of the phase. By analogy, the magnitude of the phase difference between adjacent frames in high-speed interferometry can exceed π radians if the temporal derivative of the deformation (i.e. velocity) is continuous. The measured normalized velocity will be wrapped in the range −vNyq to +vNyq, i.e. modulo 2vNyq. The assumption of continuous velocity can be used to add the correct multiple of 2vNyq to the normalized velocity at each pixel, Eq. (8).
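The velocity unwrapping described above is directly analogous to conventional phase unwrapping, with the period 2π replaced by 2vNyq. A minimal sketch (illustrative Python; numpy ≥ 1.21 for the `period` argument of `np.unwrap`):

```python
import numpy as np

def unwrap_velocity(v_wrapped, v_nyq=1.0):
    """Sub-Nyquist velocity unwrapping, by analogy with Eq. (8).

    Assumes the true velocity is continuous, i.e. changes by less than
    v_nyq between frames, so the correct multiple of 2*v_nyq can be
    restored exactly as in conventional phase unwrapping.
    """
    return np.unwrap(np.asarray(v_wrapped, dtype=float), period=2.0 * v_nyq)

# Harmonic vibration with peak velocity 4*v_nyq (in units of v_nyq),
# wrapped into the range (-v_nyq, +v_nyq] and then recovered.
# v_true starts at zero, providing the known (stationary) reference
# required to fix the absolute multiple of 2*v_nyq.
t = np.linspace(0.0, 1.0, 400)
v_true = 4.0 * np.sin(2.0 * np.pi * 2.0 * t)
v_wrapped = (v_true + 1.0) % 2.0 - 1.0
assert np.max(np.abs(unwrap_velocity(v_wrapped) - v_true)) < 1e-6
```

The sampling here keeps the frame-to-frame velocity change well below vNyq, i.e. within the acceleration limit that the text derives next.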
The assumption of continuous velocity is reasonable for many measurements involving incremental loading or vibration. A known or stationary reference is required to determine the absolute velocity, i.e. to determine the integer multiple in Eq. (8). The measurement limit is determined by the normalized velocity changing by vNyq between frames. The acceleration is given by Eq. (9), which yields an acceleration limit for unwrapping the normalized velocity of vNyq/ts (Eq. (10)).
3. Experimental demonstration
A schematic of the experimental system is shown in Fig. 1 . The output from a diode-pumped, frequency-doubled Nd:YVO4 laser (single-frequency continuous wave output at 532 nm at power levels up to 500 mW) was divided by a polarizing beam splitter (PBS) into orthogonally linearly polarized object and reference beams. Each beam was launched into the fast axis of a highly birefringent (hi-bi) optical fibre. Light from the object fibre illuminated the test object directly or through a cylindrical lens (L1) depending on the area under test. The distal end of the object fibre was rotated by 90° about its optical axis to align its fast axis (and hence the linear polarization of the emerging light) with that of the reference fibre. The test object was a centre-pinned 14 cm diameter circular aluminium plate with retroreflective coating, driven by a piezoelectric element attached to its rear face. The object was imaged to the 1024 × 1024 pixel array of a CMOS camera (Photonfocus, MV-D1024, 8-bit resolution). Light from the reference fibre was collimated, passed through an electro-optic phase modulator (PM), and then relaunched into a hi-bi fibre to path match the object beam. Light emerging from the reference fibre was collimated (by L2) and directed to the combining beam splitter (BS) to produce an on-axis reference beam on the CMOS camera. The phase modulator was used only to calibrate the size of the spatial phase steps introduced.
Two identical binary phase gratings (G1, G2) of 40 µm pitch were used to divide the object and reference wavefronts. The gratings were designed with a phase modulation depth to suppress the zero and even diffracted orders at the wavelength of the laser. The ±1 diffracted orders were recorded by the detector. A π/2 phase step was produced between the diffracted orders by introducing a relative lateral translation of G2 with respect to G1 of a quarter of the grating pitch. To the best of our knowledge, this is the first time binary gratings have been used both to separate the images and to introduce the phase step. Polarization [12,13], diffraction gratings [14], and holographic optical elements [15,16] have been proposed to introduce spatial phase steps. The implementation of [14] is most relevant to this paper: a pair of diffraction gratings produced three phase-shifted interferograms on a single detector, and the phase step between channels was calibrated with temporal phase stepping. The system was used for transient flow visualisation with a camera operating at 82 frames per second. A binary grating has been used to separate the images for spatial phase stepping, but ancillary polarizing elements were used to introduce the phase step [17,18].
Alignment of the imaging system required the two diffracted orders of the object and reference beams to be sampled equally by the pixel array of the detector with sub-pixel correspondence, and the relative phase step between the diffracted orders to be π/2 radians at each pixel. To achieve equal sampling of the diffracted object beams, grating G1 was rotated in its plane about the observation optical axis and translated along the optical axis for fine control of the magnification. Image correlation was used to quantify alignment errors between the two images of the object, without the reference beam [13,14]. First, the correlation coefficient was maximized in the vertical direction as the grating was rotated, and then maximized in the horizontal direction as the grating was translated along the observation axis. Reference beam alignment was achieved in the same way, by rotation of G2 in its plane about the reference beam optical axis and translation along the optical axis.
The spatial phase step between the diffracted orders was calibrated using temporal phase stepping. With the object stationary and thermal effects reduced to a minimum by appropriate shielding around the interferometer, the phase in both diffracted orders was calculated from four consecutive phase-stepped frames using Eq. (4). A histogram of the phase difference (modulo 2π) between corresponding pixels in the diffracted orders showed a normal distribution centred on the size of the spatial phase step. G2 was translated in its plane in the horizontal direction to centre the distribution on the desired spatial phase step. Figure 2 shows the spatial phase step calculated in this way plotted against translation of the reference grating. The gradient of the best fit line is 9°/μm, corresponding to a 10 μm relative translation between G1 and G2 for a 90° phase step, i.e. a quarter of the pitch of the gratings.
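The calibration procedure can be sketched as follows. This is illustrative Python only: it assumes Eq. (4) is the standard four-frame algorithm with temporal steps of 0, π/2, π and 3π/2, and it summarizes the histogram of per-pixel phase differences by a circular mean rather than a fitted peak.

```python
import numpy as np

def four_frame_phase(frames):
    """Four-step algorithm for phase steps 0, pi/2, pi, 3*pi/2.

    Eq. (4) of the paper is assumed to be of this standard form.
    """
    i1, i2, i3, i4 = frames
    return np.arctan2(i4 - i2, i1 - i3)

def spatial_step_estimate(frames_plus, frames_minus):
    """Phase step between the two diffracted-order channels.

    The histogram peak of the text is summarized here by a circular
    mean of the per-pixel phase difference (modulo 2*pi).
    """
    d = four_frame_phase(frames_plus) - four_frame_phase(frames_minus)
    return np.angle(np.mean(np.exp(1j * d)))

# Synthetic static speckle field; the -1 order channel is offset by pi/2.
rng = np.random.default_rng(0)
phi = rng.uniform(0.0, 2.0 * np.pi, 1000)
steps = [0.0, np.pi / 2, np.pi, 3 * np.pi / 2]
fp = [1.0 + 0.7 * np.cos(phi + s) for s in steps]
fm = [1.0 + 0.7 * np.cos(phi - np.pi / 2 + s) for s in steps]
assert abs(spatial_step_estimate(fp, fm) - np.pi / 2) < 1e-6
```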
A qualitative demonstration of the alignment of the diffracted orders is shown in Fig. 3. Full-field speckle interferograms of the test object were recorded before and after an out-of-plane deformation, Figs. 3(a) and 3(b). Direct subtraction of the images produced excellent quality fringes, but the relative phase step between the diffracted orders is lost, Fig. 3(c). Cross-subtraction between the two diffracted orders requires sub-pixel alignment. The fringe visibility was reduced due to small errors in the alignment, but the relative phase step between the diffracted orders is seen, Fig. 3(d). Clearly fringes are not required for calculating the phase with Eqs. (4) or (7), but they demonstrate qualitatively the alignment of the diffracted orders.
Dynamic measurements were recorded for the centre-clamped circular target vibrating at natural frequencies of 250 and 518 Hz. Full-field time-averaged fringe patterns were recorded at 25 fps to identify the vibration modes and regions of interest for the subsequent high-speed measurements. The cylindrical lens L1 was then inserted into the object beam to illuminate a horizontal region of interest, which was interrogated at 20,000 fps. The ability to switch rapidly between regions of interest is an advantage of the CMOS detector [6]. The vibration amplitude was varied to produce a range of nominal velocities at these frequencies, estimated from the full-field fringe patterns.
The spatial phase stepped system was initially tested at two nominal velocities up to vNyq, and compared to the temporal phase step approach. Figure 4(a) shows a composite spatiotemporal speckle interferogram recorded at the higher nominal maximum velocity. Each horizontal row in the figure corresponds to one frame (656 × 1 pixels) recorded at time t. The two illuminated regions in each row correspond to the ±1 diffracted orders from the gratings. For temporal phase stepping, the velocity at each pixel (column) was calculated using Eqs. (4) and (5), Fig. 4(b). The velocity for one pixel is shown in Fig. 4(c). The same data were then analysed using Eq. (7), combining the spatial phase-stepped data from the diffracted orders, Fig. 4(c). The spatial phase stepping approach worked successfully to the theoretical limit of vNyq, whilst errors in the temporal phase stepped data increased as the velocity increased, Fig. 4(d).
Figure 5 shows the velocity recorded with the spatial phase step system for two larger nominal maximum velocities, Figs. 5(a) and 5(c). The normalized velocity was obtained modulo 2vNyq when vNyq was exceeded, Fig. 5(c). The normalized velocity was unwrapped by adding the correct multiple of 2vNyq to the wrapped normalized velocity using Eq. (8), Figs. 5(b) and 5(d). For harmonic vibration, the sub-Nyquist acceleration limit of Eq. (10) corresponds to a maximum velocity well above that achieved here. However, unwrapping was not successful above 4vNyq for this system, due to the camera exposure as discussed in the next section.
4. Error analysis
To model the effects of velocity, acceleration and camera frame rate on the measurement accuracy, a 256 × 1 pixel fringe pattern was generated using Eq. (3). The phase distribution represented a central node and two regions vibrating harmonically out-of-phase, similar to Fig. 5, and was calculated using the expression given in Eq. (11).
The velocity at each pixel was also calculated, and used to determine the reduction in fringe visibility for an exposure te. The calculation was repeated for the number of frames (rows) required for two vibration periods at the chosen frequency. The second term in Eq. (11) represents an arbitrary starting phase in the range 0 to 2π radians, varying linearly with position. A constant phase step of π/2 radians was introduced between the phase-stepped images.
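The visibility reduction used in the simulation can be modelled, under the assumption of constant velocity during the exposure, by a sinc factor. This form follows from integrating a uniformly moving fringe over the exposure and is an illustration, not quoted from the paper:

```python
import numpy as np

def visibility_factor(v_norm, te_over_ts):
    """Fringe-visibility reduction from motion during the exposure.

    A velocity of v_nyq sweeps pi radians of phase per frame period ts,
    so a normalized velocity v_norm sweeps pi*v_norm*(te/ts) during the
    exposure te; integrating the fringe over the exposure multiplies its
    modulation by sinc of half that angle.  Assumes constant velocity
    during the exposure; this sinc form is not quoted from the paper.
    """
    half = 0.5 * np.pi * np.asarray(v_norm, dtype=float) * te_over_ts
    return np.sinc(half / np.pi)  # np.sinc(x) = sin(pi*x)/(pi*x)

# Zero visibility when a full fringe period is swept during the
# exposure, i.e. v_norm = 2*ts/te; unit visibility for zero velocity.
assert abs(visibility_factor(2.0 / 0.5, 0.5)) < 1e-12
assert visibility_factor(0.0, 0.5) == 1.0
```

Under this model, shortening the exposure (reducing te/ts) pushes the zero-visibility velocity higher, consistent with the error analysis below.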
The normalized velocity was calculated from the simulated interferograms using the procedure described in Section 2. The maximum error between the calculated normalized velocity and the value used to generate the simulated interferograms is shown in Fig. 6(a) for a range of normalized velocities. The maximum error (which occurs at the maximum velocity), rather than the rms error over the whole vibration period, was used so as to keep the results general to other types of deformation. For each number of frames per vibration period in Fig. 6(a), the sub-Nyquist acceleration limit of Eq. (10), expressed as a maximum velocity, is marked by a vertical line above which no further calculations were performed. The specific values in the simulation were a detector sampling period ts = 10 µs (i.e. a camera frame rate of 100,000 fps), with the object vibration frequency chosen to give 1000, 100 or 10 frames per vibration period. These values yield sub-Nyquist maximum velocity limits of 15.9vNyq and 1.59vNyq at 100 and 10 frames per vibration period respectively.
Errors in the normalized velocity arise due to non-linearity in the acceleration between frames. This nonlinearity is not significant for a large number of frames per vibration period, e.g. 1000, and the normalized velocity error is negligible. The error increases as the number of frames per vibration period is reduced, e.g. 100 and 10. However, the errors for spatial phase stepping are an order of magnitude smaller than those associated with temporal phase stepping [5]. The normalized jerk (rate of change of acceleration) is given by Eq. (12), from which the maximum normalized velocity error was modelled by Eq. (13). Equation (13) is plotted at each of the three values of frames per vibration period considered in Fig. 6(a) and shows good agreement with the points obtained from the simulation.
Figure 6(b) shows the maximum normalized velocity error plotted against the number of frames per vibration period, for a range of velocities. The sub-Nyquist acceleration limits of Eq. (10) are marked by vertical lines at 31.4 and 94.2 frames per vibration period for velocities of 5vNyq and 15vNyq respectively, below which no further calculations were performed. The error due to the nonlinearity in the acceleration (i.e. the jerk) decreases as the number of frames per vibration period increases and as the velocity decreases. The maximum error from the simulation and the value predicted by Eq. (13) are again in good agreement.
In practice, experimental phase errors arise in addition to those caused by the non-linearity of the acceleration. Figure 7 shows the effect of other common error sources on the maximum normalized velocity error, for a velocity comparable to the maximum normalized velocity achieved experimentally. Errors due to the exposure te, expressed as a fraction of the camera frame period ts, are most significant, Fig. 7(a). The error for the exposure used experimentally (camera operating at 20,000 fps, object vibration frequency 250 Hz) is significant; at shorter exposures, te/ts = 0.01 and 0.001, the errors fall to the limit due to jerk. For the simulation, te/ts = 0.001 could correspond to a 10 ns illumination pulse. Intensity noise for the camera was estimated at 2 grey levels rms. The error in the phase step was estimated at 9° rms from Fig. 2, although this is probably a significant overestimate because it includes errors due to image sub-sampling. In both cases the velocity error remains small, Figs. 7(b) and 7(c).
The root mean square (rms) phase error measured for a stationary surface was found to be 6°. A similar rms difference was determined between the temporal and spatial data in Fig. 4(c). Phase errors in the spatial phase stepped system were attributed primarily to mismatch in the sampling of the diffracted images [12,18]. At velocities above vNyq, Eq. (7) was sensitive to the reduced fringe modulation caused by the surface motion during the exposure te, Fig. 7(a). The resulting error in the calculated phase prevented successful measurement above a surface velocity of 4vNyq, somewhat less than the limit for the harmonic vibration that corresponds to the acceleration limit of Eq. (10). The simulation showed that reducing te, by controlling the camera exposure or using pulsed laser illumination, would enable higher velocities to be measured. Reducing the exposure in high-speed sub-Nyquist interferometry in order to achieve accurate phase reconstruction is analogous to reducing the lateral extent of each pixel with a mask placed over the CCD in the presence of a high spatial phase gradient [19].
Reducing te should enable the acceleration limit of vNyq/ts to be reached. The acceleration limit could in turn be exceeded if a further assumption regarding continuity in the surface acceleration were made. For the general case of continuous motion with unknown initial velocity, the data could be presented as the velocity change between frames, i.e. the instrument could be considered to be a multipoint accelerometer. The measured normalized acceleration would be wrapped and the assumption of continuous acceleration used to unwrap it, by analogy with Eq. (8). In principle, continuity of higher order derivatives could be assumed within experimental limits of noise. The limit to using higher order derivatives would be the velocity that produces a full period of the interference signal during the exposure te, when the fringe visibility is reduced to zero.
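The multipoint-accelerometer reading of the data can be sketched in the same way as the velocity unwrapping. This is an illustrative Python sketch under the stated assumptions (continuous acceleration, a known initial velocity, and numpy ≥ 1.21):

```python
import numpy as np

def velocity_from_wrapped_increments(dv_wrapped, v0=0.0, v_nyq=1.0):
    """Sketch of the 'multipoint accelerometer' reading of the data.

    The per-frame velocity change is measured modulo 2*v_nyq; assuming
    the acceleration is continuous, the increments are unwrapped with
    period 2*v_nyq and summed from a known initial velocity v0.
    """
    dv = np.unwrap(np.asarray(dv_wrapped, dtype=float), period=2.0 * v_nyq)
    return v0 + np.concatenate(([0.0], np.cumsum(dv)))

# Velocity changes by ~3*v_nyq between frames (beyond the first-order
# limit), but the acceleration varies slowly, so the increments unwrap.
t = np.linspace(0.0, 1.0, 400)
v_true = 100.0 * np.cos(2.0 * np.pi * 2.0 * t)   # starts at peak: dv ~ 0
dv_wrapped = (np.diff(v_true) + 1.0) % 2.0 - 1.0
v_rec = velocity_from_wrapped_increments(dv_wrapped, v0=v_true[0])
assert np.max(np.abs(v_rec - v_true)) < 1e-6
```

Note that cumulative summation integrates measurement noise, so in practice the usable range of this second-order scheme would be set by noise as well as by the visibility limit.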
High-speed sub-Nyquist interferometry increases the velocity measurement range and the effective data transfer rate from the camera. The requirement for only two phase-stepped images in order to increase the effective data transfer rate with spatial phase stepping could then be relaxed in some cases. Phase step algorithms that use three or more spatially phase-stepped images are significantly less sensitive to a long exposure than Eq. (7). For some measurements, therefore, a velocity measurement range that extends significantly beyond vNyq could be offset against the reduced spatial resolution of three (or more) phase stepped images, to achieve an overall increase in the effective data transfer rate from the camera. High-speed sub-Nyquist interferometry could hence be applied to other spatial phase-stepped interferometers that use three or more phase stepped samples [20].
Spatial phase stepping was successfully introduced to high-speed speckle pattern interferometry to increase the velocity measurement range. Phase stepped images were recorded to distinct regions of a single high-speed detector in order to minimize the additional hardware costs. Binary gratings were used to produce two phase stepped images, which enabled an increase in the effective data transfer rate from the camera (in pixels per second) compared to simply increasing the camera frame rate in a temporal phase stepped system. Absolute velocity measurements up to the limit vNyq were achieved. Simultaneous acquisition of the spatial phase-stepped interferograms reduces errors due to acceleration; phase errors due to the system alignment dominated.
High-speed sub-Nyquist interferometry further increased the maximum velocity to 4vNyq, i.e. an order of magnitude increase with respect to temporal phase stepping, by unwrapping the normalized velocity modulo 2vNyq. Unwrapping assumes that the velocity is continuous, which is reasonable for many measurements of incremental loading or vibration. A known or stationary reference is required to obtain the absolute unwrapped velocity. The measurement limit is determined by an acceleration of vNyq/ts, although this was not achieved in practice due to the exposure te. For a reduced camera exposure or pulsed laser illumination, the acceleration limit should be achievable. The assumption of continuous higher order derivatives (e.g. acceleration, jerk) could then enable higher velocities to be measured, up to the velocity at which the fringe visibility vanishes during the exposure.
We are grateful to Ms. Rong Wang for assistance with data acquisition programming. Andrew Moore acknowledges the support of the UK Atomic Weapons Establishment (AWE) through its William Penney Fellowship scheme.
References and links
1. J. M. Kilpatrick, A. J. Moore, J. S. Barton, J. D. C. Jones, M. Reeves, and C. Buckberry, “Measurement of complex surface deformation by high-speed dynamic phase-stepped digital speckle pattern interferometry,” Opt. Lett. 25(15), 1068–1070 (2000). [CrossRef]
2. W. N. MacPherson, M. Reeves, D. P. Towers, A. J. Moore, J. D. C. Jones, M. Dale, and C. Edwards, “Multipoint laser vibrometer for modal analysis,” Appl. Opt. 46(16), 3126–3132 (2007). [CrossRef] [PubMed]
3. J. M. Huntley, G. H. Kaufmann, and D. Kerr, “Phase-shifted dynamic speckle pattern interferometry at 1 kHz,” Appl. Opt. 38(31), 6556–6563 (1999). [CrossRef]
5. P. D. Ruiz, J. M. Huntley, Y. Shen, C. R. Coggrave, and G. H. Kaufmann, “Phase errors in low-frequency vibration measurement with high-speed phase-shifting speckle pattern interferometry,” Opt. Eng. 40(9), 1984–1992 (2001). [CrossRef]
6. T. Wu, J. D. Jones, and A. J. Moore, “High-speed phase-stepped digital speckle pattern interferometry using a complementary metal-oxide semiconductor camera,” Appl. Opt. 45(23), 5845–5855 (2006). [CrossRef] [PubMed]
7. A. J. Moore, D. P. Hand, J. S. Barton, and J. D. C. Jones, “Transient deformation measurement with electronic speckle pattern interferometry and a high-speed camera,” Appl. Opt. 38(7), 1159–1162 (1999). [CrossRef]
8. A. Ettemeyer and Z. Wang, “Verfahren und Vorrichtung zur Bestimmung von Phasen und Phasendifferenzen,” Patent DE 195 13 234 (1995).
9. H. van Brug, “Temporal phase unwrapping and its application in shearography systems,” Appl. Opt. 37(28), 6701–6706 (1998). [CrossRef]
11. M. Reeves, A. J. Moore, D. P. Hand, and J. D. C. Jones, “Dynamic shape measurement system for laser materials processing,” Opt. Eng. 42(10), 2923–2929 (2003). [CrossRef]
13. A. L. Weijers, H. van Brug, and H. J. Frankena, “Polarization phase stepping with a savart element,” Appl. Opt. 37(22), 5150–5155 (1998). [CrossRef]
14. T. D. Upton and D. W. Watt, “Optical and electronic design of a calibrated multichannel electronic interferometer for quantitative flow visualization,” Appl. Opt. 34(25), 5602–5610 (1995). [CrossRef] [PubMed]
15. B. B. García, A. J. Moore, C. Pérez-López, L. Wang, and T. Tschudi, “Transient deformation measurement with electronic speckle pattern interferometry by use of a holographic optical element for spatial phase stepping,” Appl. Opt. 38(28), 5944–5947 (1999). [CrossRef]
16. B. Barrientos García, A. J. Moore, C. Perez-Lopez, L. Wang, and T. Tschudi, “Spatial phase-stepped interferometry using a holographic optical element,” Opt. Eng. 38(12), 2069–2074 (1999). [CrossRef]
17. J. Kranz, J. Lamprecht, A. Hettwer, and J. Schwider, “Fiber optical single frame speckle interferometer for measuring industrial surfaces,” Proc. SPIE 3407, 328–331 (1998). [CrossRef]
18. A. Hettwer, J. Kranz, and J. Schwider, “Three channel phase-shifting interferometer using polarization-optics and a diffraction grating,” Opt. Eng. 39(4), 960–966 (2000). [CrossRef]
19. J. E. Greivenkamp, A. E. Lowman, and R. J. Palum, “Sub-Nyquist interferometry: Implementation and measurement capability,” Opt. Eng. 35(10), 2962–2969 (1996). [CrossRef]
20. M. Novak, J. Millerd, N. Brock, M. North-Morris, J. Hayes, and J. Wyant, “Analysis of a micropolarizer array-based simultaneous phase-shifting interferometer,” Appl. Opt. 44(32), 6861–6868 (2005). [CrossRef] [PubMed]