Optica Publishing Group

High-speed, sub-Nyquist interferometry

Open Access

Abstract

The velocity measurement limit in dynamic interferometry is vNyq, the velocity at which the interferogram is sampled at the Nyquist limit. We show that vNyq can be exceeded by assuming continuity of the surface motion and unwrapping the velocity modulo 2vNyq. The technique was demonstrated in a high-speed speckle pattern interferometer with spatial phase stepping. Surface velocities of 4vNyq were measured experimentally. With a reduced exposure, high-speed sub-Nyquist interferometry could be implemented up to a maximum acceleration of vNyq/ts, where ts is the detector frame period.

©2011 Optical Society of America

1. Introduction

Full-field, transient deformation measurements with interferometry have become routine with the availability of inexpensive high-speed detectors. The maximum velocity that can be measured and the number of measurement points are limited by the data transfer rate of the detector: as the velocity increases, an increased frame rate is required to sample the interferogram, thus reducing the number of measurement points at a given data transfer rate. In this research, spatial phase stepping was introduced to a high-speed interferometer in a way that increases both the velocity measurement range and the effective data transfer rate. By assuming that the velocity is continuous, a further increase in both quantities was achieved. These methods were implemented for vibration measurement with speckle interferometry, but are applicable to other types of transient deformation measured with other optical systems.

Vibration amplitude and relative vibration phase have been measured simultaneously at a number of separated points with high-speed speckle interferometry [1]. The system used a line-scan camera (256 pixels) operating at 100,000 frames per second (fps) and temporal phase stepping. It operated as a multipoint vibrometer that measured velocities up to 3.2 mm/s [2]. The measurement of relative vibration phase at separated points is not possible with a conventional single point laser vibrometer due to the low bandwidth scanning of the measurement point; nor is it possible with standard full-field techniques such as time-averaged speckle interferometry due to the low bandwidth of the detector. Speckle interferometry with temporal phase stepping has also been used to measure continuous surface deformations at lower speeds but at more measurement points. A maximum velocity of ~25 µm/s was measured over 239 × 192 pixels at 1,000 fps [3], and ~1 µm/s was measured over 512 × 512 pixels at 40 fps [4]. The data transfer rate from the camera was similar in each case, being 25.6, 45.9 and 10.5 Mpixels/s in [1], [3] and [4] respectively. The phase evolution with time can also be determined at each pixel independently by Fourier or wavelet analysis [4].

The maximum velocity is set by the Nyquist condition, that the intensity is sampled at least twice per interference fringe [14]:

$$v_{Nyq}=\frac{\lambda}{2\eta t_s}\qquad(1)$$
where λ is the laser wavelength, η is the interferometer’s sensitivity to the object displacement resolved in the observation direction (η=2 for collinear illumination and observation directions) and ts is the detector frame period. A temporal phase step is equivalent to a pseudo velocity of vPS=λ/(ηtsN), where N is the number of phase steps per interference fringe period. The velocity limit corresponding to the Nyquist fringe sampling limit with temporal phase stepping is therefore [1]:
$$v_{Nyq,PS}=v_{Nyq}-v_{PS}=\frac{\lambda}{2\eta t_s}\left(1-\frac{2}{N}\right)\qquad(2)$$
which has a maximum value of 0.5vNyq for N=4, i.e. a π/2 radian phase step between frames. In practice, phase step errors associated with non-linear velocity (e.g. acceleration) introduce errors in the phase calculated from the N sequential frames, which become unacceptable above a maximum velocity of approximately 0.3vNyq [5,6].
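The limits in Eqs. (1) and (2) are straightforward to evaluate. The sketch below is my own Python, not code from the paper; the parameter values (532 nm, η = 2, 20,000 fps) are taken from the experimental system described later.

```python
# Nyquist velocity limits, Eqs. (1)-(2). Illustrative sketch only.

def v_nyq(lam, eta, t_s):
    """Nyquist velocity limit: half an interference fringe per frame."""
    return lam / (2 * eta * t_s)

def v_nyq_ps(lam, eta, t_s, N):
    """Velocity limit with temporal phase stepping (N steps per fringe)."""
    return v_nyq(lam, eta, t_s) * (1 - 2 / N)

lam, eta, t_s = 532e-9, 2, 1 / 20_000   # wavelength (m), sensitivity, frame period (s)
print(v_nyq(lam, eta, t_s))             # ~2.66e-3 m/s
print(v_nyq_ps(lam, eta, t_s, 4))       # half of v_Nyq for N = 4
```

For N = 4 the temporal phase-stepped limit is exactly half of vNyq, as stated above.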

Spatial phase stepping, which separates phase-stepped interferograms to one or multiple detectors, eliminates vPS and potentially enables the velocity limit to be increased to vNyq. Separating images to multiple detectors is not practical for most high-speed applications, where the additional hardware cost can quickly become prohibitive. Separating the images to distinct regions of a single detector is an alternative, although the resolution of each image is then reduced. If three (or more) phase-stepped channels are recorded on a single detector, the increase in measurement range to vNyq will be offset by the reduced number of pixels at which the measurement is made. In other words, the same increase in maximum velocity could be achieved at fewer pixels by increasing the camera frame rate by three (or more) times in a temporal phase stepped system, and so the effective data transfer rate from the camera has not been improved.

In this paper, two spatial phase-stepped channels were recorded on a single high-speed detector through the use of binary phase gratings. The maximum velocity range was increased to vNyq and the effective data transfer rate from the camera was increased. It is shown in the following section that, by assuming that the velocity is continuous, measurements beyond vNyq can be made, further increasing the effective data transfer rate.

2. High-speed sub-Nyquist interferometry

For an interferometer using continuous-wave illumination, the interference intensity recorded by a detector at pixel (x,y) and at time tn, integrated over the frame exposure te, is given by [7]:

$$I_n(x,y,t_n)=I\left(1+V\,\mathrm{sinc}\left[\frac{\phi}{2}\right]\cos\left[\Phi_O-\Phi_R\right]\right)\qquad(3)$$
where I(x,y) is the bias intensity and V(x,y) is the visibility of the interference, which depends on the relative intensity of the two beams, their states of polarization and their coherence. For short $t_e$, the object deformation resolved in the observation direction, $w_n(x,y,t_n)$, can be approximated by a linear expansion, i.e. the velocity $\dot{w}_n$ is assumed to be constant during the exposure. The instantaneous visibility, $V\,\mathrm{sinc}[\phi/2]$, depends on the relative phase change between the object and reference beams during the exposure, $\phi=(2\pi\eta/\lambda)\dot{w}_n t_e$. Good visibility requires the argument of the sinc function to approach zero, i.e. small $\dot{w}_n$ and $t_e$. The object phase, $\Phi_O(x,y,t_n)$, comprises the phase due to the deformation of the target, $(2\pi\eta/\lambda)w_n$, plus, in a speckle interferometer, the random speckle phase.

For temporal phase stepping [14], the reference phase, ΦR(tn), includes a phase-step of 2πn/N. For N=4, the interference phase at each pixel (x,y) can be recovered from four consecutive images by Carré’s algorithm:

$$\Phi_O(t_n)-\Phi_R(t_n)=\tan^{-1}\frac{\sqrt{\left[3(I_n-I_{n+1})-(I_{n-1}-I_{n+2})\right]\left[(I_n-I_{n+1})+(I_{n-1}-I_{n+2})\right]}}{(I_n+I_{n+1})-(I_{n-1}+I_{n+2})}\qquad(4)$$
It is assumed that the frame rate is sufficiently high that the velocity is approximately constant during any four consecutive frames, so that the phase at a given pixel can be calculated from $I_{n-1}$ to $I_{n+2}$. When the data include some known or stationary reference, recovery of the absolute spatiotemporal profile from the phase $\Phi_O(t_n)$ is possible. For the general case of continuous motion with unknown initial deformation, phase changes between frames represent velocity:
$$\dot{w}_n(x,y,t_n)=\frac{\lambda}{2\pi\eta t_s}\,\Delta\Phi_O(x,y,t_n)\quad\mathrm{or}\quad\frac{\dot{w}_n(x,y,t_n)}{v_{Nyq}}=\frac{\Delta\Phi_O(x,y,t_n)}{\pi}\qquad(5)$$
where at a given pixel $\Delta\Phi_O(t_n)=\Phi_O(t_n)-\Phi_O(t_{n-1})-\pi/2$ represents the change in phase due to the deformation between frames. Normalizing the velocity by the Nyquist velocity limit in Eq. (5) gives an immediate indication of the interferometer performance without referring to the particular laser wavelength and camera frame rate [6].
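As a numerical check on Eq. (4), the sketch below (my own Python) simulates four frames of Eq. (3) with I = V = 1 and a π/2 step centred on the frame sequence; the square-root sign ambiguity of the Carré algorithm is left unresolved here, so the recovered phase is restricted to (0, π).

```python
import numpy as np

def carre(I1, I2, I3, I4):
    """Carré phase from four frames with equal phase steps, Eq. (4).
    The square root loses sign information, so this sketch returns a
    value in (0, pi); full-range recovery needs extra sign logic."""
    num = (3 * (I2 - I3) - (I1 - I4)) * ((I2 - I3) + (I1 - I4))
    den = (I2 + I3) - (I1 + I4)
    return np.arctan2(np.sqrt(np.maximum(num, 0.0)), den)

# Four frames of Eq. (3) with I = V = 1 and a pi/2 step centred on phi:
phi, step = 0.5, np.pi / 2
frames = [1 + np.cos(phi + (2 * k - 3) * step / 2) for k in range(4)]
print(carre(*frames))   # recovers phi = 0.5
```

The normalized velocity of Eq. (5) then follows from the difference of consecutive recovered phases, divided by π.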

For spatial phase stepping, the intensity at corresponding pixels (x,y)N recorded on N separate regions of a single detector (or N separate detectors) at time tn is given by:

$$I_{n,N}(x,y,t_n)=I\left(1+V\,\mathrm{sinc}\left[\frac{\phi}{2}\right]\cos\left[\Phi_O-\Phi_R\right]\right)\qquad(6)$$
A constant phase step of $\Phi_R=(N-1)\pi/2$ is introduced between each image. If N=2, the phase difference at each pixel (x,y) can be calculated using the relation [8,9]:

$$\Delta\Phi_O(t_n)-\frac{\pi}{2}=2\tan^{-1}\frac{I_{n-1,N}-I_{n,N-1}}{I_{n-1,N-1}-I_{n,N}}\qquad(7)$$
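The two-channel relation can be verified numerically. The sketch below (my own Python) simulates the two phase-stepped channels of Eq. (6) with I = V = 1; the sign conventions are my reconstruction of Eq. (7) and may differ from the authors' by an overall sign.

```python
import numpy as np

def frame_pair(phase):
    """Two spatially phase-stepped channels of Eq. (6), pi/2 apart (I = V = 1)."""
    return 1 + np.cos(phase), 1 + np.cos(phase - np.pi / 2)

def delta_phi(prev, curr):
    """Phase change between frames n-1 and n from cross-channel
    differences, following Eq. (7); signs are my reconstruction."""
    (p1, p2), (c1, c2) = prev, curr
    return -(np.pi / 2 + 2 * np.arctan2(p2 - c1, p1 - c2))

prev, curr = frame_pair(0.3), frame_pair(0.7)
print(delta_phi(prev, curr))   # recovers 0.7 - 0.3 = 0.4
```

Only two frames and two channels are needed per velocity sample, which is what preserves the effective data transfer rate.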

Greivenkamp introduced sub-Nyquist interferometry to measure large aspheric profiles in which the magnitude of the phase difference between adjacent pixels exceeded π radians [10]. Provided that the phase was measured accurately, the Nyquist limit was due to the reconstruction (spatial unwrapping) algorithm. A large increase in the measurement range of the interferometer was achieved by assuming that the spatial derivative of the aspheric surface shape (i.e. its slope) was continuous. The phase gradient was used to interpolate between pixels during spatial unwrapping of the phase. By analogy, the magnitude of the phase difference between adjacent frames in high-speed interferometry can exceed π radians if the temporal derivative of the deformation (i.e. velocity) is continuous. The measured normalized velocity will be wrapped in the range ±vNyq, i.e. modulo 2vNyq. The assumption of continuous velocity can be used to add the correct multiple of 2vNyq to the normalized velocity at each pixel (x,y):

$$\overline{\frac{\Delta\Phi_O(t_n)}{\pi}}=\frac{\Delta\Phi_O(t_n)}{\pi}\pm 2k\qquad(8)$$
where $\overline{\Delta\Phi_O(t_n)/\pi}$ is the unwrapped normalized velocity, $\Delta\Phi_O(t_n)/\pi$ is the measured normalized velocity modulo 2vNyq (i.e. modulo 2 in normalized units) and k is an integer.
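In normalized units this is standard one-dimensional phase unwrapping with period 2. A sketch, in my own Python (NumPy ≥ 1.21 for the period argument); the 250 Hz, 20,000 fps and 4vNyq values echo the experiment reported later.

```python
import numpy as np

# Unwrapping the normalized velocity modulo 2 (i.e. modulo 2*v_Nyq),
# Eq. (8): np.unwrap adds the multiple 2k that keeps successive samples
# within +/-1 of each other, valid while the velocity change per frame
# stays below v_Nyq (the acceleration limit of Eq. (10)).

f, fps = 250.0, 20_000
t = np.arange(fps // int(f)) / fps            # one vibration period
true = 4 * np.sin(2 * np.pi * f * t)          # velocity in units of v_Nyq
wrapped = (true + 1) % 2 - 1                  # measured, wrapped to [-1, 1)
unwrapped = np.unwrap(wrapped, period=2)      # k = 0 fixed by first sample

print(np.max(np.abs(unwrapped - true)))       # ~0
```

The first sample acts as the stationary reference that fixes k = 0.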

The assumption of continuous velocity is reasonable for many measurements involving incremental loading or vibration. A known or stationary reference is required to determine the absolute velocity, i.e. to fix k=0 in Eq. (8). The measurement limit is reached when the velocity changes by vNyq between frames. The acceleration is given by:

$$\ddot{w}_n(x,y,t_n)=\frac{\lambda}{2\pi\eta t_s^2}\,\Delta^2\Phi_O(x,y,t_n)\quad\mathrm{or}\quad\frac{\ddot{w}_n(x,y,t_n)}{v_{Nyq}/t_s}=\frac{\Delta^2\Phi_O(x,y,t_n)}{\pi}\qquad(9)$$
where at a given pixel $\Delta^2\Phi_O(t_n)=\Delta\Phi_O(t_{n+1})-\Delta\Phi_O(t_n)$. Setting $\Delta^2\Phi_O=\pi$ in Eq. (9) yields an acceleration limit for unwrapping the normalized velocity of:

$$a=\frac{v_{Nyq}}{t_s}\qquad(10)$$

3. Experimental demonstration

A schematic of the experimental system is shown in Fig. 1. The output from a diode-pumped, frequency-doubled Nd:YVO4 laser (single-frequency continuous wave output at 532 nm at power levels up to 500 mW) was divided by a polarizing beam splitter (PBS) into orthogonally linearly polarized object and reference beams. Each beam was launched into the fast axis of a highly birefringent (hi-bi) optical fibre. Light from the object fibre illuminated the test object directly or through a cylindrical lens (L1) depending on the area under test. The distal end of the object fibre was rotated by 90° about its optical axis to align its fast axis (and hence the linear polarization of the emerging light) with that of the reference fibre. The test object was a centre-pinned 14 cm diameter circular aluminium plate with retroreflective coating, driven by a piezoelectric element attached to its rear face. The object was imaged to the 1024 × 1024 pixel array of a CMOS camera (Photonfocus, MV-D1024, 8-bit resolution). Light from the reference fibre was collimated, passed through an electro-optic phase modulator (PM), and then relaunched into a hi-bi fibre to path match the object beam. Light emerging from the reference fibre was collimated (by L2) and directed to the combining beam splitter (BS) to produce an on-axis reference beam on the CMOS camera. The phase modulator was used only to calibrate the size of the spatial phase steps introduced.

Fig. 1 Schematic of spatial phase-stepped speckle interferometer. L, lens; A, aperture and P, polariser. Other components described in text.

Two identical binary phase gratings (G1, G2) of 40 µm pitch were used to divide the object and reference wavefronts. The gratings were designed with a phase modulation depth to suppress the zero and even diffracted orders at the wavelength of the laser [11]. The ± 1 diffracted orders were recorded by the detector. A π/2 phase step was produced between the diffracted orders by introducing a relative lateral translation of G2 with respect to G1 of a quarter of the grating pitch. To the best of our knowledge, this is the first time binary gratings have been used both to separate the images and to introduce the phase step. Polarisation [12,13], diffraction gratings [14] and holographic optical elements [15,16] have been proposed to introduce spatial phase steps. The implementation of [14] is most relevant to this paper. A pair of diffraction gratings produced three phase-shifted interferograms on a single detector and the phase step between channels was calibrated with temporal phase stepping. The system was used for transient flow visualisation with a camera operating at 82 frames per second. A binary grating has been used to separate the images for spatial phase stepping, but ancillary polarizing elements were used to introduce the phase step [17,18].

Alignment of the imaging system required the two diffracted orders of the object and reference beams to be sampled equally by the pixel array of the detector with sub-pixel correspondence, and the relative phase step between the diffracted orders to be π/2 radians at each pixel. To achieve equal sampling of the diffracted object beams, grating G1 was rotated in its plane about the observation optical axis and translated along the optical axis for fine control of the magnification. Image correlation was used to quantify alignment errors between the two images of the object, without the reference beam [13,14]. Firstly, the correlation coefficient was maximized in the vertical direction as the grating was rotated, and then maximized in the horizontal direction as the grating was translated along the observation axis. Reference beam alignment was achieved in the same way, by rotation of G2 in its plane about the reference beam optical axis and translation along the optical axis.

The spatial phase step between the diffracted orders was calibrated using temporal phase stepping. With the object stationary and thermal effects reduced to a minimum by appropriate shielding around the interferometer, the phase in both diffracted orders was calculated from four consecutive phase-stepped frames using Eq. (4). A histogram of the phase difference (modulo 2π) between corresponding pixels in the diffracted orders showed a normal distribution centred on the size of the spatial phase step. G2 was translated in its plane in the horizontal direction to centre the distribution on the desired spatial phase step. Figure 2 shows the spatial phase step calculated in this way plotted against translation of the reference grating. The gradient of the best fit line is 9°/μm, corresponding to a 10 μm relative translation between G1 and G2 for a 90° phase step, i.e. a quarter of the pitch of the gratings.
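The measured gradient is consistent with simple grating geometry if one pitch of relative translation corresponds to a full 360° phase step between the diffracted orders; that correspondence is the assumption of this quick check (my own Python).

```python
# Phase step vs. relative grating translation, assuming one full pitch
# of relative translation gives a 360 deg step between the orders.
pitch_um = 40.0
grad = 360.0 / pitch_um        # 9 deg/um, matching the best-fit gradient
shift_for_90deg = 90.0 / grad  # 10 um, i.e. a quarter of the pitch
print(grad, shift_for_90deg)
```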

Fig. 2 Mean phase difference between corresponding pixels in the two diffracted orders plotted against lateral translation of grating G2 in its plane. Error bars show ± 3 standard deviations.

A qualitative demonstration of the alignment of the diffracted orders is shown in Fig. 3. Full-field speckle interferograms of the test object were recorded before and after an out-of-plane deformation, Figs. 3(a) and 3(b). Direct subtraction of the images produced excellent quality fringes but the relative phase step between the channels is lost, Fig. 3(c). Cross-subtraction between the two diffracted orders requires sub-pixel alignment. The fringe visibility was reduced due to small errors in the alignment, but the relative phase step between the diffracted orders is seen, Fig. 3(d). Clearly fringes are not required for calculating the phase with Eq. (4) or (7), but they demonstrate qualitatively the alignment of the diffracted orders.

Fig. 3 Spatial phase-stepped results for out-of-plane deformation. Speckle interferograms recorded (a) at tn-1 and (b) tn. (c) Direct subtraction of images, e.g. In,N-1 - In-1,N-1. (d) Cross-subtraction of images, e.g. In,N-1 - In-1,N.

Dynamic measurements were recorded for the centre-clamped circular target vibrating at natural frequencies of 250 and 518 Hz. Full-field time-averaged fringe patterns were recorded at 25 fps to identify the vibration modes and regions of interest for the subsequent high-speed measurements. The cylindrical lens L1 was then inserted into the object beam to illuminate a horizontal region of interest which was interrogated at 20,000 fps. The ability to switch rapidly between regions of interest is an advantage of the CMOS detector [6]. The vibration amplitude was varied to produce a range of nominal velocities at these frequencies, estimated from the full-field fringe patterns.

The spatial phase stepped system was initially tested for nominal velocities of 0.1vNyq and 0.5vNyq, and compared to the temporal phase step approach. Figure 4(a) shows a composite spatiotemporal speckle interferogram recorded at a nominal maximum velocity of 0.1vNyq. Each horizontal row in the figure corresponds to one frame (656 × 1 pixels) recorded at time tn. The two illuminated regions in each row correspond to the ± 1 diffracted orders from the gratings. For temporal phase stepping, the velocity at each pixel (column) was calculated using Eqs. (4) and (5), Fig. 4(b). The velocity for one pixel is shown in Fig. 4(c). The same data were then analysed using Eq. (7), combining the spatial phase-stepped data from the diffracted orders, Fig. 4(c). The spatial phase stepping approach worked successfully to the theoretical limit of 0.5vNyq, whilst errors in the temporal phase stepped data increased as the velocity increased, Fig. 4(d).

Fig. 4 (a) Spatiotemporal speckle interferogram for a horizontal region of interest recorded at 20,000 fps for object vibrating harmonically at 518 Hz with nominal maximum velocity 0.1vNyq. (b) Normalized velocity, ΔΦ/π, calculated from temporal phase stepped data. Comparison of normalized velocity for a single column calculated from temporal and spatial phase stepped data at (c) 0.1vNyq and (d) 0.5vNyq.

Figure 5 shows the velocity recorded with the spatial phase-step system for nominal maximum velocities of 0.5vNyq and 4vNyq, Figs. 5(a) and 5(c). The normalized velocity was obtained modulo 2vNyq when vNyq was exceeded, Fig. 5(c). The normalized velocity was unwrapped by adding the correct multiple of 2vNyq to the wrapped normalized velocity using Eq. (8), Figs. 5(b) and 5(d). For harmonic vibration, the sub-Nyquist acceleration limit of Eq. (10) corresponds to a maximum velocity of v=vNyq/(2πfts), or 12.7vNyq for these experimental parameters. However, unwrapping was not successful above 4vNyq for this system, a limit due to the camera exposure, as discussed in the next section.
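The 12.7vNyq figure follows from the acceleration limit: for harmonic motion at frequency f the peak acceleration is 2πfv, so setting this equal to vNyq/ts from Eq. (10) gives v = vNyq/(2πfts). A quick check in my own Python, with the 250 Hz and 20,000 fps experimental values:

```python
import math

# Velocity limit for harmonic vibration from the acceleration limit
# a = v_Nyq / t_s (Eq. (10)): peak acceleration 2*pi*f*v <= v_Nyq/t_s.
f, t_s = 250.0, 1 / 20_000
v_max = 1 / (2 * math.pi * f * t_s)   # in units of v_Nyq
print(round(v_max, 1))                # 12.7
```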

Fig. 5 Normalized velocity, ΔΦ/π, calculated from spatial phase stepped data at nominal maximum velocities of (a) 0.5vNyq and (c) 4vNyq. Images recorded at 20,000 fps for object vibrating harmonically at 250 Hz. Unwrapped normalized velocity for one pixel (column) is shown in (b) and (d).

4. Error analysis

To model the effects of velocity, acceleration and camera frame rate on the measurement accuracy, a 256 × 1 pixel fringe pattern was generated using Eq. (3). The phase distribution represented a central node and two regions vibrating harmonically out-of-phase, similar to Fig. 5, and was calculated using the expression:

$$\Phi_O(x,t_n)=\frac{2\pi\eta}{\lambda}w_0\sin(2\pi f t_n)\sin(2\pi x)+2\pi x\qquad(11)$$
for 0<x<1, I = 127.5 and V = 1. The vibration amplitude w0 was defined in terms of the maximum normalized velocity for a given interferometer arrangement (λ, η and ts, which define vNyq) and object vibration frequency (f ):

$$w_0=\left(\frac{v}{v_{Nyq}}\right)\frac{v_{Nyq}}{2\pi f}\qquad(12)$$

The velocity at each tn was also calculated, and used to determine the reduction in fringe visibility for an exposure of te/ts=0.001. The calculation was repeated for 0 < tn < 2/f, giving the number of rows required for two vibration periods at the chosen frequency. The second term in Eq. (11) represents an arbitrary starting phase in the range 0 to 2π radians, varying linearly with position. A constant phase step of $\Phi_R=(N-1)\pi/2$ was introduced between N=2 images.

The normalized velocity was calculated from the simulated interferograms using the procedure described in Section 2. The maximum error between the calculated normalized velocity and the value used to calculate the simulated interferograms is shown in Fig. 6(a) for a range of normalized velocities. The maximum error (which occurs at the maximum velocity), rather than the rms error over the whole vibration period, was used so as to keep the results general to other types of deformation. For each 1/(fts) in Fig. 6(a), the sub-Nyquist acceleration limit of Eq. (10) in the form v=vNyq/(2πfts) is marked by a vertical line above which no further calculations were performed. The specific values in the simulation were: detector sampling period ts=10μs (i.e. a camera frame rate of 100,000 fps); λ=532nm; η=2 and object vibration frequency f=1kHz. These values yield the sub-Nyquist maximum velocity limits of 15.9vNyq and 1.6vNyq at 1/(fts)=100 and 10 respectively.
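The quoted limits follow from v = vNyq/(2πfts) with 1/(fts) frames per vibration period. The sketch below (my own Python, not the authors' simulation code) checks them and includes the empirical error model referred to as Eq. (13) below.

```python
import math

# Sub-Nyquist velocity limits for the Section 4 simulation parameters.
# 1/(f*t_s) is the number of frames per vibration period, so the harmonic
# limit v/v_Nyq = 1/(2*pi*f*t_s) depends only on that ratio.

t_s, lam, eta = 10e-6, 532e-9, 2       # 100,000 fps, 532 nm, eta = 2
v_nyq = lam / (2 * eta * t_s)          # 13.3 mm/s for these values
for frames_per_period in (100, 10):
    limit = frames_per_period / (2 * math.pi)
    print(frames_per_period, round(limit, 1))   # 15.9 and 1.6

def max_velocity_error(f, t_s, v_norm):
    """Empirical jerk-driven error model of Eq. (13)."""
    return (2 * math.pi * f * t_s) ** 2 * v_norm / 25
```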

Fig. 6 Maximum normalized velocity error plotted against (a) normalized velocity and (b) number of frames per vibration period.

Errors in the normalized velocity arise due to non-linearity in the acceleration between frames. This nonlinearity is not significant for a large number of frames per vibration period, e.g. 1/(fts)=1000, and the normalized velocity error is negligible. The error increases as the number of frames per vibration period is reduced, e.g. 1/(fts)=100 and 10. However, the errors for spatial phase stepping are an order of magnitude smaller than those associated with temporal phase stepping [6]. The normalized jerk (rate of change of acceleration) is given by the third temporal derivative of wn normalized by vNyq/ts², from which the maximum normalized velocity error was modelled by:

$$\frac{1}{25}\left(2\pi f t_s\right)^2\left(\frac{v}{v_{Nyq}}\right)\qquad(13)$$
where the factor 1/25 was determined empirically. Equation (13) is plotted at each of the three values of 1/(fts) considered in Fig. 6(a) and shows good agreement with the points obtained from the simulation.

Figure 6(b) shows the maximum normalized velocity error plotted against the number of frames per vibration period, for a range of velocities. The sub-Nyquist acceleration limits of v=vNyq/(2πfts) are marked by vertical lines at 1/(fts)=31.4 and 94.2 for velocities of vmax/vNyq=5 and 15 respectively, below which no further calculations were performed. The error due to the nonlinearity in the acceleration (i.e. the jerk) decreases as the number of frames per vibration period increases and as the velocity decreases. The maximum error from the simulation and the value predicted by Eq. (13) are again in good agreement.

In practice, experimental phase errors arise in addition to those caused by the non-linearity of the acceleration. Figure 7 shows the effect of other common error sources on the maximum normalized velocity error, for v/vNyq=4, which is comparable to the maximum normalized velocity achieved experimentally. Errors due to the exposure, expressed as a fraction of the camera frame period te/ts, are most significant, Fig. 7(a). The error at 1/(fts)=80 (i.e. the camera operating at 20,000 fps, object vibration frequency 250 Hz and te/ts=0.1 used experimentally) is approximately 0.3vNyq. At shorter exposures, te/ts=0.01 and 0.001, the errors are equal to the limit due to jerk. For the simulation, te/ts=0.001 could correspond to a 10 ns illumination pulse. Intensity noise for the camera was estimated at 2 grey levels rms. The error in the phase step was estimated at 9° rms from Fig. 2, although this is probably a significant overestimate because it includes errors due to image sub-sampling. In both cases the velocity error is below approximately 0.1vNyq, Figs. 7(b) and 7(c).

Fig. 7 Maximum normalized velocity error plotted against number of frames per vibration period, for v/vNyq = 4. (a) Influence of exposure time. (b) Influence of zero-mean, Gaussian-distributed intensity noise. (c) Influence of phase step error.

5. Discussion

The root mean square (rms) phase error measured for a stationary surface was found to be 0.03vNyq or 6°. A similar rms difference was determined between the temporal and spatial data at v/vNyq=0.1 in Fig. 4(c). Phase errors in the spatial phase stepped system were attributed primarily to mismatch in the sampling of the diffracted images [12,18]. At velocities above vNyq, Eq. (7) was sensitive to the reduced fringe modulation caused by the surface motion during the exposure te/ts=0.1, Fig. 7(a). The resulting error in the calculated phase prevented successful measurement above a surface velocity of 4vNyq, somewhat less than the limit of 12.7vNyq for the harmonic vibration that corresponds to the acceleration limit of Eq. (10). The simulation showed that reducing te/ts, by controlling the camera exposure or using pulsed laser illumination, would enable higher velocities to be measured. Reducing the exposure in high-speed sub-Nyquist interferometry in order to achieve accurate phase reconstruction is analogous to reducing the lateral extent of each pixel with a mask placed over the CCD in the presence of a high spatial phase gradient [19].

Reducing te/ts should enable the acceleration limit of vNyq/ts to be reached. The acceleration limit could in turn be exceeded if a further assumption regarding continuity in the surface acceleration were made. For the general case of continuous motion with unknown initial velocity, the data could be presented as the velocity change between frames, i.e., the instrument could be considered to be a multipoint accelerometer. The measured normalized acceleration would be wrapped modulo 2vNyq/ts and the assumption of continuous acceleration used to unwrap it, by analogy with Eq. (8). In principle, continuity of higher order derivatives could be assumed within experimental limits of noise. The limit to using higher order derivatives would be the velocity that produces a period of the interference signal during the exposure te, i.e. 2(ts/te)vNyq, when the fringe visibility is reduced to zero.

High-speed, sub-Nyquist interferometry increases the velocity measurement range and the effective data transfer rate from the camera. The requirement for only two phase-stepped images to increase the effective data transfer rate with spatial phase stepping could then be relaxed in some cases. Phase-step algorithms that use three or more spatially phase-stepped images are significantly less sensitive to an exposure of te/ts=0.1 than Eq. (7). For some measurements, therefore, a velocity measurement range that extends significantly beyond vNyq could be traded against the reduced spatial resolution of three (or more) phase-stepped images, to achieve an overall increase in the effective data transfer rate from the camera. High-speed sub-Nyquist interferometry could thus be applied to other spatial phase-stepped interferometers that use three or more phase-stepped samples [20].

6. Conclusion

Spatial phase stepping was successfully introduced to high-speed speckle pattern interferometry to increase the velocity measurement range. Phase stepped images were recorded to distinct regions of a single high-speed detector in order to minimize the additional hardware costs. Binary gratings were used to produce two phase stepped images, which enabled an increase in the effective data transfer rate from the camera (in pixels per second) compared to simply increasing the camera frame rate in a temporal phase stepped system. Absolute velocity measurements up to the limit vNyq were achieved. Simultaneous acquisition of the spatial phase-stepped interferograms reduces errors due to acceleration; phase errors due to the system alignment dominated.

High-speed, sub-Nyquist interferometry further increased the maximum velocity to 4vNyq, i.e. an order of magnitude increase with respect to temporal phase stepping, by unwrapping the normalized velocity modulo 2vNyq. Unwrapping assumes that the velocity is continuous, which is reasonable for many measurements of incremental loading or vibration. A known or stationary reference is required to obtain the absolute unwrapped velocity. The measurement limit is determined by an acceleration of vNyq/ts, although this was not achieved in practice due to the exposure te/ts=0.1. For a reduced camera exposure or pulsed laser illumination, the acceleration limit should be achievable. The assumption of continuous higher order derivatives (e.g. acceleration, jerk) could then enable higher velocities to be measured, up to a maximum velocity of 2(ts/te)vNyq.

Acknowledgments

We are grateful to Ms. Rong Wang for assistance with data acquisition programming. Andrew Moore acknowledges the support of the UK Atomic Weapons Establishment (AWE) through its William Penney Fellowship scheme.

References and links

1. J. M. Kilpatrick, A. J. Moore, J. S. Barton, J. D. C. Jones, M. Reeves, and C. Buckberry, “Measurement of complex surface deformation by high-speed dynamic phase-stepped digital speckle pattern interferometry,” Opt. Lett. 25(15), 1068–1070 (2000). [CrossRef]  

2. W. N. MacPherson, M. Reeves, D. P. Towers, A. J. Moore, J. D. C. Jones, M. Dale, and C. Edwards, “Multipoint laser vibrometer for modal analysis,” Appl. Opt. 46(16), 3126–3132 (2007). [CrossRef]   [PubMed]  

3. J. M. Huntley, G. H. Kaufmann, and D. Kerr, “Phase-shifted dynamic speckle pattern interferometry at 1 kHz,” Appl. Opt. 38(31), 6556–6563 (1999). [CrossRef]  

4. X. Colonna de Lega and P. Jacquot, “Deformation measurement with object-induced dynamic phase shifting,” Appl. Opt. 35(25), 5115–5121 (1996). [CrossRef]   [PubMed]  

5. P. D. Ruiz, J. M. Huntley, Y. Shen, C. R. Coggrave, and G. H. Kaufmann, “Phase errors in low-frequency vibration measurement with high-speed phase-shifting speckle pattern interferometry,” Opt. Eng. 40(9), 1984–1992 (2001). [CrossRef]  

6. T. Wu, J. D. Jones, and A. J. Moore, “High-speed phase-stepped digital speckle pattern interferometry using a complementary metal-oxide semiconductor camera,” Appl. Opt. 45(23), 5845–5855 (2006). [CrossRef]   [PubMed]  

7. A. J. Moore, D. P. Hand, J. S. Barton, and J. D. C. Jones, “Transient deformation measurement with electronic speckle pattern interferometry and a high-speed camera,” Appl. Opt. 38(7), 1159–1162 (1999). [CrossRef]  

8. A. Ettemeyer and Z. Wang, “Verfahren und Vorrichtung zur Bestimmung von Phasen und Phasendifferenzen” [Method and device for determining phases and phase differences], Patent DE 195 13 234 (1995).

9. H. van Brug, “Temporal phase unwrapping and its application in shearography systems,” Appl. Opt. 37(28), 6701–6706 (1998). [CrossRef]  

10. J. E. Greivenkamp, “Sub-Nyquist interferometry,” Appl. Opt. 26(24), 5245–5258 (1987). [CrossRef]   [PubMed]  

11. M. Reeves, A. J. Moore, D. P. Hand, and J. D. C. Jones, “Dynamic shape measurement system for laser materials processing,” Opt. Eng. 42(10), 2923–2929 (2003). [CrossRef]  

12. A. J. P. van Haasteren and H. J. Frankena, “Real-time displacement measurement using a multicamera phase-stepping speckle interferometer,” Appl. Opt. 33(19), 4137–4142 (1994). [CrossRef]   [PubMed]  

13. A. L. Weijers, H. van Brug, and H. J. Frankena, “Polarization phase stepping with a Savart element,” Appl. Opt. 37(22), 5150–5155 (1998). [CrossRef]  

14. T. D. Upton and D. W. Watt, “Optical and electronic design of a calibrated multichannel electronic interferometer for quantitative flow visualization,” Appl. Opt. 34(25), 5602–5610 (1995). [CrossRef]   [PubMed]  

15. B. B. García, A. J. Moore, C. Pérez-López, L. Wang, and T. Tschudi, “Transient deformation measurement with electronic speckle pattern interferometry by use of a holographic optical element for spatial phase stepping,” Appl. Opt. 38(28), 5944–5947 (1999). [CrossRef]  

16. B. Barrientos García, A. J. Moore, C. Perez-Lopez, L. Wang, and T. Tschudi, “Spatial phase-stepped interferometry using a holographic optical element,” Opt. Eng. 38(12), 2069–2074 (1999). [CrossRef]  

17. J. Kranz, J. Lamprecht, A. Hettwer, and J. Schwider, “Fiber optical single frame speckle interferometer for measuring industrial surfaces,” Proc. SPIE 3407, 328–331 (1998). [CrossRef]  

18. A. Hettwer, J. Kranz, and J. Schwider, “Three channel phase-shifting interferometer using polarization-optics and a diffraction grating,” Opt. Eng. 39(4), 960–966 (2000). [CrossRef]  

19. J. E. Greivenkamp, A. E. Lowman, and R. J. Palum, “Sub-Nyquist interferometry: Implementation and measurement capability,” Opt. Eng. 35(10), 2962–2969 (1996). [CrossRef]  

20. M. Novak, J. Millerd, N. Brock, M. North-Morris, J. Hayes, and J. Wyant, “Analysis of a micropolarizer array-based simultaneous phase-shifting interferometer,” Appl. Opt. 44(32), 6861–6868 (2005). [CrossRef]   [PubMed]  
