Optica Publishing Group

Camera-based speckle noise reduction for 3-D absolute shape measurements

Open Access

Abstract

Simultaneous position and velocity measurements enable absolute 3-D shape measurements of fast rotating objects, for instance for monitoring the cutting process in a lathe. Laser Doppler distance sensors enable simultaneous position and velocity measurements with a single sensor head by evaluating the scattered light signals. However, the superposition of several speckles with equal Doppler frequency but random phase on the photo detector results in an increased velocity and shape uncertainty. In this paper, we present a novel image evaluation method that overcomes the uncertainty limitations due to the speckle effect. For this purpose, the scattered light is detected with a camera instead of single photo detectors. Thus, the Doppler frequency from each speckle can be evaluated separately, and the velocity uncertainty decreases with the square root of the number of camera lines. A reduction of the velocity uncertainty by an order of magnitude is verified by numerical simulations and experimental results. As a result, the measurement uncertainty of the absolute shape is no longer limited by the speckle effect.

© 2016 Optical Society of America

1. Introduction

Absolute shape measurements of workpieces are important for process monitoring and process control in working lathes. As the state of the art, coordinate measurement machines (CMMs) allow absolute shape measurements with submicron precision [1]. However, the measurement process is slow compared to the workpiece processing time in the lathe, due to the time required for setting up the measurement and due to the tactile nature of conventional CMMs. Furthermore, the measurement is usually performed ex-situ, after the processing. This means that an immediate control of the workpiece processing is difficult to achieve. For this reason, an absolute shape measurement is required inside the lathe.

Optical measurement techniques such as triangulation [2], low coherence interferometry [3], conoscopic holography [4], absolute distance interferometry [5], laser trackers [6], mode-locking external cavity laser sensors [7], dispersive white-light interferometry [8] and wavelength scanning heterodyne interferometry [9] can enable high measurement rates and submicron distance uncertainties. Thus, they can be employed to measure the surface profile of the workpiece, either by scanning or by employing the inherent rotation of the workpiece inside the lathe. However, at high surface or scanning velocities the measurement uncertainty of these techniques increases, due to the decreasing averaging time and the speckle effect, which occurs at rough surfaces. Moreover, all these techniques offer only one measurand, the distance, so the absolute diameter of the workpiece cannot be obtained.

One approach to obtain the diameter of the workpiece is to additionally measure the tangential velocity of the surface during rotation. For instance, the laser Doppler distance sensor with frequency evaluation (LDD sensor) and the laser Doppler distance sensor with phase evaluation (P-LDD sensor) enable simultaneous velocity and distance measurements [10] and are well suited to measurements at fast moving rough surfaces as well [11]. For these reasons, they enable an absolute, in-situ shape measurement inside the lathe [12,13]. The measurement principles are based on the evaluation of the Doppler frequencies of the scattered light signals and are thereby affected by speckle [14]. In the conventional setup, the scattered light is detected with photo detectors. Therefore, several speckles oscillating with equal Doppler frequency but random phases are superposed on the detector [15]. This results in an increased surface velocity uncertainty and, therefore, in an increased uncertainty of the measured shape.

The aim of this paper is to introduce a camera-based scattered light detection to separate the different speckles, to evaluate the speckles separately, and thereby to reduce the measurement uncertainty. For this purpose, first the measurement principle of the P-LDD sensor is described and the measurement uncertainty budget for an absolute shape measurement is derived in Sec. 2. Since the speckle related uncertainty of the Doppler frequency is dominant, the novel camera-based method and the respective image processing for the reduction of this measurement uncertainty are proposed in Sec. 3. A numerical simulation is presented in order to model the measurement system and to characterise the reduction of the measurement uncertainty by employing a camera-based signal detection and evaluation. Finally, the experimental setup and experimental results are presented to validate the numerical results in Sec. 4.

2. P-LDD sensor for 3-D absolute shape measurement

As shown in Fig. 1, the P-LDD sensor (as well as the LDD sensor) is based on a Mach-Zehnder velocity interferometer [16,17] using interference fringe patterns. It is easily described for single scattering particles passing the measurement volume with a certain velocity. The amplitude of the scattered light is then modulated with the Doppler frequency fD [18]. With the interference fringe distance d, the particle velocity v perpendicular to the fringe system in the measurement volume reads

v = f_D d.    (1)
Determining the fringe distance by a calibration and evaluating the Doppler frequency thus allow a velocity measurement. In order to measure the axial position z simultaneously, two mutually tilted interference fringe systems with equal fringe spacings and the tilting angle ψ are superposed. This leads to a position dependent lateral offset between both fringe systems, resulting in a position and velocity dependent delay τ between the two scattered light signals. Note that the two light signals can be distinguished by wavelength multiplexing. In order to eliminate the velocity dependency, the position dependent phase φ = 2πτf_D is determined to calculate
z = φ/s,    (2)
where s is the slope of the calibration function φ(z).
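A minimal numerical sketch of the two relations above (velocity from the Doppler frequency and the fringe spacing, position from the phase and the calibration slope); all numerical values are illustrative assumptions, not the sensor's actual parameters:

```python
# Sketch of the velocity and position evaluation; all values are assumed.
f_D = 50e3        # measured Doppler frequency in Hz (assumed)
d = 4e-6          # calibrated fringe spacing in m (assumed)
v = f_D * d       # surface velocity in m/s

s = 0.02          # slope of the calibration function phi(z) in rad/um (assumed)
phi = 0.5         # measured phase difference in rad (assumed)
z = phi / s       # axial position in um

print(v, z)       # 0.2 m/s, 25 um
```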

 figure: Fig. 1

Fig. 1 Principle of P-LDD sensor: superposition of two interference fringe systems with constant and equal fringe spacing d, which are tilted towards each other by an angle ψ. The scattered light from each fringe system is detected with a photo detector (wavelength multiplexing). On the right, one fringe system in the x-y-plane in the center of the measurement volume is depicted.


By evaluating the surface distance and the surface velocity of a rotating workpiece, the absolute shape can finally be evaluated by [12]

r(α) = R + Δr(α) = v̄/ω + (z̄ − z(α)).    (3)
The mean radius R of the workpiece is estimated by the mean value of the measured surface velocity divided by the angular velocity ω, which is constant and known. The angle dependent deviation Δr(α) of the workpiece radius from the mean radius is obtained from the difference between the mean value z̄ and the angle resolved distance z(α). While the rotation of the workpiece in the lathe enables the angle resolved radius measurement (2-D shape), an additional feed motion of the sensor along the y direction achieves an absolute 3-D shape measurement, cf. Fig. 1.
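The shape evaluation of Eq. (3) can be sketched with synthetic data; the radius profile, angular velocity and distance offset below are assumptions for illustration:

```python
import numpy as np

# Sketch of the shape evaluation of Eq. (3): the mean radius follows from the
# mean surface velocity and the known angular velocity, the angle-resolved
# deviation from the distance measurement. All values are synthetic assumptions.
omega = 2 * np.pi * 10                            # 10 rev/s (assumed)
alpha = np.linspace(0, 2 * np.pi, 100, endpoint=False)

R_true = 20e-3                                    # 20 mm mean radius
dr_true = 2e-6 * np.cos(3 * alpha)                # 2 um, 3-lobed deviation
v = omega * (R_true + dr_true)                    # simulated surface velocity
z = 1e-3 - dr_true                                # larger radius -> surface closer

r = v.mean() / omega + (z.mean() - z)             # Eq. (3)
print(np.max(np.abs(r - (R_true + dr_true))))     # ~0 (exact for this model)
```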

According to Eq. (3) the shape measurement results from the simultaneous measurement of the surface velocity v and surface distance z. Therefore, the shape uncertainty results from the measurement uncertainties σv, σz of the velocity and the distance, respectively,

σ_v = sqrt(σ_v,noise² + σ_v,speckle² + (v σ_d/d)²),   σ_z = sqrt(σ_z,noise² + σ_z,speckle²).    (4)
The velocity uncertainty and the distance uncertainty are both influenced by the noise originating from the photo detection (σ_noise) [19]. An additional uncertainty (σ_speckle) results from the speckle effect [20], because multiple speckles with equal modulation frequency (Doppler frequency f_D) but random phase interfere on the detector surface [19,21], which results in phase jumps in the scattered light signal, cf. Fig. 2. Furthermore, the surface velocity uncertainty depends on the relative uncertainty of the fringe spacing σ_d/d, which occurs due to temperature drifts of the laser diode, as well. From Eq. (3) and Eq. (4), the measurement uncertainty budget of the shape reads
σ_r = sqrt( R² [ (σ_v,noise/(v√N))² + (σ_v,speckle/(v√M))² + (σ_ω/ω)² + (σ_d/d)² ] + σ_z,noise²/(N/M) + σ_z,speckle² ),    (5)
where the uncertainty due to noise and speckles can be reduced by averaging the results over N total measurement points and M independent surface segments, respectively. The different contributors and their resulting shape uncertainties have been derived experimentally and are listed in Table 1.
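The combination of the contributors in Eq. (5) can be sketched numerically. The values for R, N, M and σ_v,speckle/v follow the text and Table 1; the remaining contributor values are illustrative assumptions:

```python
import numpy as np

# Sketch of the uncertainty budget of Eq. (5). R, N, M and rel_v_speckle are
# taken from the text; the other contributor values are assumed for illustration.
R = 20e-3                  # mean workpiece radius in m
N, M = 1e5, 1e4            # total points / independent surface segments
rel_v_noise = 1e-3         # sigma_v,noise / v (assumed)
rel_v_speckle = 4e-3       # sigma_v,speckle / v (from Sec. 3)
rel_omega = 1e-6           # sigma_omega / omega (assumed)
rel_d = 1e-5               # sigma_d / d (assumed)
sz_noise = 1e-6            # sigma_z,noise in m (assumed)
sz_speckle = 0.2e-6        # sigma_z,speckle in m (assumed)

sigma_r = np.sqrt(
    R**2 * ((rel_v_noise / np.sqrt(N))**2
            + (rel_v_speckle / np.sqrt(M))**2
            + rel_omega**2 + rel_d**2)
    + sz_noise**2 / (N / M)
    + sz_speckle**2)
print(sigma_r)   # ~1 um, dominated by R*rel_v_speckle/sqrt(M) = 0.8 um
```

With these numbers the speckle term R·σ_v,speckle/(v√M) = 0.8 μm dominates, consistent with the statement after Table 1.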

 figure: Fig. 2

Fig. 2 (a) Five separate speckle signals with the same Doppler frequency but with random amplitudes and arrival times and (b) the corresponding surface signal with the superposed speckle signals. The superposition leads to random phase jumps due to the random arrival times, which cause the frequency fluctuations in (c) and distort the amplitude spectrum in (d).



Table 1. Measurement uncertainty budget for a 3-D shape measurement containing the most significant contributors. The numerical values are derived experimentally. The uncertainties are calculated for a representative 3-D shape measurement with stepwise scanning along the height of the workpiece with M = 10^4 (10^2 measurement points per revolution × 10^2 scanning steps along the y direction, cf. Fig. 1), N = 10^5 (M × 10 revolutions per scanning step), the mean radius of the workpiece R = 20 mm and a temperature variation of ΔT = 1 K.

According to Table 1, the total measurement uncertainty for an absolute shape measurement amounts to σ_r ≈ 1 μm and is currently limited by the relative velocity uncertainty due to the speckle effect, i.e. R σ_v,speckle/(v√M) = 0.8 μm. Therefore, the relative velocity uncertainty σ_v,speckle/v has to be decreased in order to reduce the shape uncertainty σ_r.

3. Signal model and speckle uncertainty reduction

In order to reduce σv,speckle/v, the origin of σv,speckle/v is explained first by means of a simplified model. According to the nature of a non-transparent, optically rough, solid object, the detected scattered signal is modelled as a random sequence of successive overlapped speckle signals moving with the constant group velocity v [20]. The undisturbed detector signal of a single speckle that crosses the measurement volume perpendicular to the interference fringes is described by [10]

b(n) = A exp(−2 f_D² (n/f_s − t_a)² / t_w²) cos(2π f_D (n/f_s − t_a)),   n = 0, 1, …, N−1,    (6)
with the amplitude A, the sampling frequency f_s and the total number of samples N. The arrival time t_a is the time when the speckle crosses the center of the measurement volume. The 1/e²-width t_w of the speckle signal is given by the number of interference fringes in the measurement volume divided by the Doppler frequency. For surfaces as scattering objects, several of these speckle signals with the same Doppler frequency but random amplitudes and arrival times are detected simultaneously. The detector signal from K speckles reads
s(n) = Σ_{k=1}^{K} b_k(n),   n = 0, 1, …, N−1.    (7)

In Fig. 2 one example of the simplified signal model is depicted. First, K = 5 modulated speckle signals are generated with random amplitudes and arrival times, but with constant Doppler frequency, cf. Fig. 2(a). These speckle signals are superposed on the photo detector, cf. Fig. 2(b), which results in random phase jumps of the accumulated signal. The phase jumps lead to fluctuations of the time resolved frequency, see Fig. 2(c). The most common technique for calculating the Doppler frequency is to calculate the amplitude spectrum of the detector signal by using the Fast Fourier Transform and to perform a Gaussian fit for estimating the Doppler frequency at the maximum amplitude. Due to the superposition of the speckle signals with different phases, however, the amplitude spectrum becomes distorted. Figure 2(d) shows the amplitude spectrum S(f) = |ℱ(s(n))| of the detector signal with the superposed speckle signals together with the sum B(f) = Σ_{k=1}^{K} |ℱ(b_k(n))| of the amplitude spectra of the single-speckle signals. The distortion of the amplitude spectrum S(f) results in an unknown measurement deviation with the relative standard uncertainty σ_v,speckle/v ≈ 4 × 10⁻³, which is obtained from experiments. The distortion depends on the different arrival times and the different amplitudes of the single speckle signals [21] and, thus, on the speckle effect and the surface micro geometry. Therefore, σ_v,speckle/v cannot be reduced by repeated measurements of the same surface.
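The signal model above can be sketched as follows; the sketch writes the Gaussian envelope directly with a temporal 1/e² width, and all parameter values (sampling rate, burst width, arrival-time range) are assumptions for illustration:

```python
import numpy as np

# Sketch of the speckle signal model: K Gaussian-windowed bursts with a common
# Doppler frequency but random amplitudes and arrival times are superposed.
# All parameter values are assumed for illustration.
rng = np.random.default_rng(0)
fs, fD, tw, Nn, K = 1e6, 5e4, 2e-4, 4096, 5
t = np.arange(Nn) / fs

bursts = []
for _ in range(K):
    A = rng.uniform(0.5, 1.5)                    # random amplitude
    ta = rng.uniform(0.2, 0.8) * t[-1]           # random arrival time
    bursts.append(A * np.exp(-2 * (t - ta)**2 / tw**2)
                  * np.cos(2 * np.pi * fD * (t - ta)))
s = np.sum(bursts, axis=0)                       # superposed detector signal

f = np.fft.rfftfreq(Nn, 1 / fs)
S = np.abs(np.fft.rfft(s))                       # spectrum of the superposition
B = np.sum([np.abs(np.fft.rfft(b)) for b in bursts], axis=0)  # per-burst sum

# The summed per-burst spectrum B(f) peaks cleanly at fD, while S(f) is
# distorted by the random phases of the superposed bursts.
print(f[np.argmax(B)])    # close to 5e4 Hz
```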

In order to reduce the measurement uncertainty, the speckle signals should be processed and evaluated separately, cf. Fig. 2(d) (green curve). In fluid measurements a temporal separation based on Hilbert transform [21], Empirical Mode Decomposition [22], or Wavelet transform [23] can be applied to separate consecutive burst signals. These approaches are not feasible for surface measurements, because several speckles occur simultaneously.

For investigating the novel camera-based approach, a more detailed numerical simulation is performed. For this purpose, random surfaces with the size 1 × 0.1 mm², 0.1 μm x- and y-resolution and normally distributed height profiles h(x, y) are generated as measurement objects. As illumination, a pair of laser beams with λ = 685 nm, plane wave fronts and Gaussian beam profiles (1/e²-width = 50 μm) is superposed according to the measurement arrangement depicted in Fig. 1. The obtained scalar optical field of the resulting interference fringe system is denoted by E_I(x, y). The field distribution E_o(x, y) of the scattered light directly at the object surface is then calculated by [19]

E_o(x, y) = E_I(x, y) exp{i (2π/λ) 2h(x, y)}.    (8)
The scattered light is detected through the Keplerian telescope and a mirror, cf. Fig. 1. Hence, the scattered light field in the detector plane results from Fourier optics by calculating [19]
E_d(x̄, ȳ) = ℱ₂D⁻¹{ℱ₂D{E_o(x, y)} H(f_x, f_y)},    (9)
where H(f_x, f_y) represents the optical transfer function of the mirror. The entire camera signal s(x̄, ȳ, n), which is depicted in Fig. 3 for different time steps n, is calculated by shifting the surface stepwise in the x̄-direction and repeating the above calculations. The camera images consist of several speckle signals, which are modulated with an equal Doppler frequency, and the speckles move along the direction of the object movement.
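A minimal sketch of this scattered-field calculation, assuming illustrative grid sizes, beam width, fringe period and a simple circular low-pass pupil as the mirror transfer function H:

```python
import numpy as np

# Sketch of the scattered-field simulation: the interference field acquires a
# random surface phase and is propagated to the detector by a 2-D Fourier
# filter. Grid sizes, fringe period and the pupil cut-off are assumptions.
rng = np.random.default_rng(1)
N = 256
x = np.arange(N) - N // 2

# Interference fringe system (fringe period: 8 samples) with Gaussian beams
X, Y = np.meshgrid(x, x)
E_I = np.exp(-(X**2 + Y**2) / 60**2) * np.cos(2 * np.pi * X / 8)

# Rough surface: normally distributed height profile (assumed lambda/2 rms)
lam = 685e-9
h = rng.normal(0.0, lam / 2, size=(N, N))
E_o = E_I * np.exp(1j * 2 * np.pi / lam * 2 * h)

# Finite mirror aperture modeled as a circular low-pass pupil (assumed)
fx = np.fft.fftfreq(N)
FX, FY = np.meshgrid(fx, fx)
H = (np.sqrt(FX**2 + FY**2) < 0.1).astype(float)
E_d = np.fft.ifft2(np.fft.fft2(E_o) * H)

speckle_image = np.abs(E_d)**2    # resulting speckle intensity pattern
print(speckle_image.shape)        # (256, 256)
```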

 figure: Fig. 3

Fig. 3 Simulation of the scattered light intensity images for a P-LDD sensor at four consecutive time steps. The step size is one half of the fringe spacing. Therefore, the images for n = 1 and n = 3 look almost identical but slightly shifted, since the surface movement between them equals exactly one fringe spacing. The same holds for the images n = 2 and n = 4. However, the images for n = 1 and n = 2 are totally different, because completely different surface positions are illuminated. As an example, note the fluctuation of the intensity of a dominant speckle (red box), which occurs with the Doppler frequency. Note further that only sections of 200 × 200 pixels of the calculated intensity field are presented and the grayscale is inverted for better visibility.


When using a camera, the speckles can be separated in the spatial domain. However, the speckles are moving and, thus, the time series of the signal from a single camera pixel will not show the expected signal modulation with the Doppler frequency. Since the speckles move in the x̄-direction, the pixel signals are integrated along each row according to

S_row(ȳ, n) = Σ_{j=1}^{N_c} s(x̄_j, ȳ, n),    (10)
where N_c is the number of camera columns. The resulting time series for the different camera rows are shown in Fig. 4(a). For comparison, the signal obtained with a photo detector is shown in Fig. 4(b); it contains strong variations of the modulation magnitude as well as phase jumps.

 figure: Fig. 4

Fig. 4 Simulation of the detected signals for (a) the camera-based measurement approach and (b) when using a photo detector. It can be seen in (a) that the phase of the signal over time differs between the camera rows.


Using the Fast Fourier Transform, the amplitude spectrum was determined for every camera row. After averaging all amplitude spectra, the Doppler frequency of the signal was obtained by applying a Gaussian fitting algorithm around the maximum. As shown in Fig. 5, the distortion of the spectrum when employing a single photo detector (blue curve) is clearly visible, and the center frequency greatly deviates from the true value of the Doppler frequency f_D. In contrast, the amplitude spectrum for the camera-based approach (green curve) shows significantly less distortion as expected, and the center frequency is much closer to the Doppler frequency f_D. However, the amplitude spectrum is still noisy, because the complete camera rows were integrated, which contain multiple speckles. As a consequence, subdividing each row into several sections and moving these sections with the motion of the surface (speckle tracking) would allow the single speckles to be separated and, thus, the uncertainty to be reduced further. However, this is not required here and is therefore not implemented.
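The row evaluation described above can be sketched as follows: each camera row is summed over its columns, the amplitude spectrum is computed per row and averaged, and the peak frequency is estimated. A three-point parabolic interpolation of the log-spectrum stands in for the Gaussian fit mentioned in the text; the synthetic image stack and all parameter values are assumptions:

```python
import numpy as np

# Sketch of the camera-row evaluation: row summation, per-row amplitude
# spectra, spectral averaging, and peak-frequency estimation. The synthetic
# stack (each row modulated at fD with a random phase) is an assumption.
rng = np.random.default_rng(2)
n_rows, n_cols, n_frames, fs, fD = 64, 64, 512, 1e3, 123.0

t = np.arange(n_frames) / fs
phases = rng.uniform(0, 2 * np.pi, n_rows)
stack = 1 + np.cos(2 * np.pi * fD * t[:, None, None]
                   + phases[None, :, None])          # (frame, row, 1)
stack = np.broadcast_to(stack, (n_frames, n_rows, n_cols))

S_row = stack.sum(axis=2)                            # row sums: (frame, row)
spectra = np.abs(np.fft.rfft(S_row - S_row.mean(axis=0), axis=0))
mean_spec = spectra.mean(axis=1)                     # average over rows
f = np.fft.rfftfreq(n_frames, 1 / fs)

k = np.argmax(mean_spec[1:]) + 1                     # skip the DC bin
# three-point parabolic interpolation of the log-amplitudes around the peak
la, lb, lc = np.log(mean_spec[k - 1:k + 2])
delta = 0.5 * (la - lc) / (la - 2 * lb + lc)
fD_est = (k + delta) * fs / n_frames
print(fD_est)   # close to 123 Hz
```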

 figure: Fig. 5

Fig. 5 Determined amplitude spectrum with the camera-based approach (green curve), where the amplitude spectra are determined for each (summed up) camera row and then averaged. For comparison, the formerly resulting amplitude spectrum with a single photo detector is shown (blue curve), i.e. when calculating the amplitude spectrum for the integrated image.


In order to compare the relative velocity uncertainty for the measurement approaches with a camera and with a photo detector, and in order to investigate the influence of the number of accumulated camera rows on the velocity uncertainty, the scattered light fields from 10 random surfaces are evaluated with different numbers of accumulated camera rows. One accumulated camera row is designated as a camera line. The relative velocity uncertainty σ_v,speckle/v is depicted in Fig. 6 over the line width and the number of lines. Note that the speckle size is set to 5 camera pixels.

 figure: Fig. 6

Fig. 6 The reduction of the relative surface velocity uncertainty due to speckle. (a) shows the effect of the line width in the case of a single line. (b) shows the influence of the number of lines.


As shown in Fig. 6(a), when the number of lines is one, σ_v,speckle/v is almost independent of the line width W_line. In contrast, when the line width decreases while the number of lines N_lines = N_r,total/W_line increases and the total number N_r,total of camera rows remains constant, σ_v,speckle/v decreases with 1/√N_lines, cf. Fig. 6(b). However, if the line width becomes smaller than the speckle size, σ_v,speckle/v converges towards a constant. This is due to the fact that even a single line still contains multiple speckles. When using all available camera rows as lines, σ_v,speckle/v is significantly reduced by a factor of 10 to 0.4 × 10⁻³.
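The 1/√N_lines scaling can be illustrated with a purely statistical sketch: assuming each camera line yields an independent, unbiased frequency estimate with the same standard deviation, averaging N_lines estimates reduces the spread by √N_lines. All numbers are synthetic:

```python
import numpy as np

# Statistical sketch of the 1/sqrt(N_lines) averaging argument with synthetic
# per-line estimates; sigma_single is an assumed single-line uncertainty.
rng = np.random.default_rng(3)
sigma_single = 4e-3
stds = {}
for n_lines in (1, 4, 16, 64):
    # mean of n_lines independent estimates, repeated 20000 times
    est = rng.normal(0.0, sigma_single, size=(20000, n_lines)).mean(axis=1)
    stds[n_lines] = est.std()
    print(n_lines, stds[n_lines])   # approx sigma_single / sqrt(n_lines)
```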

As a result, the simulation verifies the capability of the camera-based measurement approach to suppress speckle noise. In comparison with the single pixel evaluation by a photo detector, the camera-based speckle separation method is found to reduce σ_v,speckle/v by an order of magnitude.

4. Experimental results

For validating the simulation results, an LDD sensor with one camera is used, cf. Fig. 7. The camera type is the Basler piA640-210gm with a maximum frame rate of 210 Hz, a resolution of 648 px × 488 px, and a photoelement size of 7.4 μm × 7.4 μm. The light is scattered at a metallic specimen with a plane, rough surface, which is mounted on a moving stage. The stage is moved in steps of 1 μm. The stage control and the data processing are implemented in MATLAB.

 figure: Fig. 7

Fig. 7 Camera-based P-LDD sensor setup.


In the first experiment, the velocity of the moving stage, the total displacement and the frame rate of the camera amount to 200 μm/s, 1 mm and 200 Hz, respectively. The measured amplitude spectrum resulting from the camera-based measurement approach (green curve) is shown in Fig. 8 together with the result for a single photo detector (integration over all camera pixels). As expected, the amplitude spectrum for the camera-based measurement approach is smoother, which is consistent with the simulation result in Fig. 5.

 figure: Fig. 8

Fig. 8 Measured amplitude spectrum for the camera-based approach (B(f)) and when using a single photo detector (S(f)).


In order to specify the optimal choice of the line width of the subdivided camera image and the influence of the speckle size, several experiments with different speckle sizes are conducted. In the experiments, 10 random segments with a length of 1 mm on the specimen surface are selected for 10 repeated measurements. By changing the size of the mirror in the sensor setup, cf. Fig. 1, the numerical aperture NA and, thus, the speckle size is varied. Note that the speckle size ds reads [20]

d_s = 2λ/(π NA),    (11)
with the laser wavelength λ. The maximum NA is 0.20 due to the limited sensor dimensions. The image of the scattered light field is magnified by a factor of 16 with a microscope setup in order to exploit the full camera sensor. Thus, distinct images of the scattered light field can be obtained. Exemplary camera images for the three realized numerical apertures of 0.05, 0.07 and 0.20 are shown in Fig. 9. The higher the numerical aperture, the smaller the speckle size, which amounts to about 8.6, 5.6 and 2.0 pixels, respectively. The experimental results for the relative velocity uncertainty as a function of the camera line width are finally shown in Fig. 10.
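A quick check of the speckle-size relation d_s = 2λ/(π NA) for the three realized numerical apertures, using the sensor wavelength λ = 685 nm from Sec. 3:

```python
import math

# Evaluate the speckle-size relation d_s = 2*lambda/(pi*NA) for the three
# realized numerical apertures; lambda is the 685 nm sensor wavelength.
lam = 685e-9
for NA in (0.05, 0.07, 0.20):
    d_s = 2 * lam / (math.pi * NA)
    print(NA, d_s * 1e6)   # speckle size in micrometers: ~8.7, 6.2, 2.2
```

The resulting speckle sizes scale inversely with NA, matching the trend of the measured speckle sizes of about 8.6, 5.6 and 2.0 pixels.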

 figure: Fig. 9

Fig. 9 Samples of the measured camera images for the different numerical apertures.


 figure: Fig. 10

Fig. 10 Camera-based relative velocity uncertainty due to speckle as a function of the line width for different numerical apertures, i.e. for different speckle sizes. The larger the numerical aperture, the smaller the speckle size and the lower the relative velocity uncertainty. For a single photo detector, i.e. a single line with maximum line width (circles), the relative velocity uncertainty is maximal and amounts to 4.7 × 10⁻³, 5.7 × 10⁻³ and 6.1 × 10⁻³, respectively. Note that the dashed lines are theoretical curves for each numerical aperture.


The tendency of the σ_v,speckle/v curves agrees with the simulation, cf. Fig. 6. For the different numerical apertures, σ_v,speckle/v reaches minima of 1.30 × 10⁻³, 0.76 × 10⁻³ and 0.53 × 10⁻³, i.e. factors of 3.6, 7.5 and 11.5 smaller than without speckle separation. For NA = 0.20, the 95% confidence interval of the final σ_v,speckle/v is [0.00049, 0.00057], which shows a good robustness. Note that the minimal relative velocity uncertainty is directly proportional to the square root of the speckle size, because the smaller the speckle size, the larger the number of speckles on the camera. As a result, the final 3-D shape uncertainty with the camera-based approach is significantly reduced from 1 μm to 380 nm. Compared with the measurement setup without camera presented in [12], the 3-D shape uncertainty is reduced to the same level with M = 10,000 independent measurement points instead of 625,000. Hence, the camera-based speckle separation method effectively reduces the 3-D shape uncertainty and simultaneously enhances the sampling efficiency.

5. Conclusions

A camera-based speckle separation method is presented for the speckle noise reduction of Doppler frequency based velocity and distance sensors. It utilizes a camera instead of single photo detectors and, thus, the Doppler frequency from each speckle can be evaluated separately without speckle noise.

The relative velocity uncertainty decreases with the square root of the number of camera lines. Numerical simulations as well as experiments were performed for verifying and validating the proposed camera-based approach. Simulation and experimental results agree qualitatively and show a good robustness. The relative velocity uncertainty due to speckle decreases when increasing the numerical aperture, i.e. when reducing the speckle size. As a result, the relative surface velocity uncertainty due to speckle is reduced by an order of magnitude. This leads to a reduction of the total 3-D shape uncertainty from 1 μm to 380 nm while requiring far fewer measurement points, which effectively improves the sampling efficiency. Thus, the relative velocity uncertainty due to speckle noise is now negligibly small.

As an outlook, a more sophisticated signal processing based on separating the moving speckles in each camera row as well as in each camera line over time (for instance by speckle tracking) might decrease the measurement uncertainty further. A camera-based P-LDD sensor with the required two cameras is currently being set up for the further investigation of the distance uncertainty reduction and, finally, the in-situ shape measurement in a lathe for industrial application. The camera-based speckle separation method can also be applied in other optical sensors that are limited by the speckle effect and do not use matrix cameras yet, such as triangulation sensors or interferometers.

Acknowledgments

The financial support of the German Research Foundation (DFG) for the project Cz55/29-1 and of the China Scholarship Council (CSC) is gratefully acknowledged.

References

1. P. J. Bills, R. Racasan, R. J. Underwood, P. Cann, J. Skinner, A. J. Hart, X. Jiang, and L. Blunt, “Volumetric wear assessment of retrieved metal-on-metal hip prostheses and the impact of measurement uncertainty,” Wear 274, 212–219 (2012). [CrossRef]  

2. K. H. Goh, N. Phillips, and R. Bell, “The applicability of a laser triangulation probe to non-contacting inspection,” Int. J. Prod. Res. 24, 1331–1348 (1986). [CrossRef]  

3. A. Kempe, S. Schlamp, T. Rösgen, and K. Haffner, “Low-coherence interferometric tip-clearance probe,” Opt. Lett. 28, 1323–1325 (2003). [CrossRef]   [PubMed]  

4. G. Sirat and F. Paz, “Conoscopic probes are set to transform industrial metrology,” Sens. Rev. 18, 108–110 (1998). [CrossRef]  

5. S. H. Lu and C. C. Lee, “Measuring large step heights by variable synthetic wavelength interferometry,” Meas. Sci. Technol. 13, 1382–1387 (2002). [CrossRef]  

6. W. Schertenleib, Optical 3-D Measurement Techniques III (Wichmann, 1995).

7. J. Czarske, J. Mobius, K. Moldenhauer, and W. Ertmer, “External cavity laser sensor using synchronously-pumped laser diode for position measurements of rough surfaces,” Electron. Lett. 40, 1584–1586 (2004). [CrossRef]  

8. U. Schnell, S. Gray, and R. Dändliker, “Dispersive white-light interferometry for absolute distance measurement with dielectric multilayer systems on the target,” Opt. Lett. 21, 528–530 (1996). [CrossRef]   [PubMed]  

9. X. L. Dai and K. Seta, “High-accuracy absolute distance measurement by means of wavelength scanning heterodyne interferometry,” Meas. Sci. Technol. 9, 1031–1035 (1998). [CrossRef]  

10. T. Pfister, A. Fischer, and J. Czarske, “Cramér–Rao lower bound of laser Doppler measurements at moving rough surfaces,” Meas. Sci. Technol. 22, 055301 (2011). [CrossRef]  

11. F. Dreier, P. Günther, T. Pfister, J. Czarske, and A. Fischer, “Interferometric sensor system for blade vibration measurements in turbomachine applications,” IEEE Trans. Instrum. Meas. 62, 2297–2302 (2013). [CrossRef]  

12. R. Kuschmierz, A. Davids, S. Metschke, F. Löffler, H. Bosse, J. Czarske, and A. Fischer, “Optical, in situ, three-dimensional, absolute shape measurements in CNC metal working lathes,” Int. J. Adv. Manuf. Tech. 8234, 1–11 (2016).

13. F. Dreier, P. Günther, T. Pfister, and J. Czarske, “Miniaturized nonincremental interferometric fiber-optic distance sensor for turning process monitoring,” Opt. Eng. 51, 014402 (2012). [CrossRef]  

14. P. Günther, R. Kuschmierz, T. Pfister, and J. Czarske, “Displacement, distance, and shape measurements of fast-rotating rough objects by two mutually tilted interference fringe systems,” J. Opt. Soc. Am. A 30, 2–7 (2013). [CrossRef]  

15. P. Günther, T. Pfister, L. Büttner, and J. Czarske, “Laser Doppler distance sensor using phase evaluation,” Opt. Express 17, 2611 (2009). [CrossRef]   [PubMed]  

16. B. E. Truax, F. C. Demarest, and G. E. Sommargren, “Laser Doppler velocimeter for velocity and length measurements of moving surfaces,” Appl. Opt. 23, 67–73 (1984). [CrossRef]   [PubMed]  

17. K. Matsubara, W. Stork, A. Wagner, J. Drescher, and K. D. Müller-Glaser, “Simultaneous measurement of the velocity and the displacement of the moving rough surface by a laser Doppler velocimeter,” Appl. Opt. 36, 4516–4520 (1997). [CrossRef]   [PubMed]  

18. H.-E. Albrecht, M. Borys, N. Damaschke, and C. Tropea, Laser Doppler and Phase Doppler Measurement Techniques (Springer Verlag, 2003). [CrossRef]  

19. R. Kuschmierz, N. Koukourakis, A. Fischer, and J. Czarske, “On the speckle number of interferometric velocity and distance measurements of moving rough surfaces,” Opt. Lett. 39, 5622–5625 (2014). [CrossRef]   [PubMed]  

20. J. W. Goodman, Speckle Phenomena in Optics: Theory and Applications (Roberts, 2007).

21. H. Nobach, “Analysis of dual-burst laser Doppler signals,” Meas. Sci. Technol. 13, 33–44 (2002). [CrossRef]  

22. N. E. Huang, Z. Shen, S. R. Long, M. C. Wu, H. H. Shih, Q. Zheng, N. Yen, C. C. Tung, and H. H. Liu, “The empirical mode decomposition and the Hilbert spectrum for nonlinear and non-stationary time series analysis,” Proc. R. Soc. London, Ser. A 454, 903–995 (1998). [CrossRef]  

23. H. Nobach and E. H. R. van Maanen, “LDA and PDA signal analysis using wavelets,” Exp. Fluids 30, 613–625 (2001). [CrossRef]  


