Optica Publishing Group

Optical methods for distance and displacement measurements

Open Access

Abstract

This tutorial reviews various noncontact optical sensing techniques that can be used to measure distances to objects, and related parameters such as displacements, surface profiles, velocities and vibrations. The techniques that are discussed and compared include intensity-based sensing, triangulation, time-of-flight sensing, confocal sensing, Doppler sensing, and various kinds of interferometric sensing with both high- and low-coherence sources.

©2012 Optical Society of America

1. Introduction

In this tutorial we review various noncontact optical sensing techniques that can be used to measure distances to objects, and related parameters such as displacements, surface profiles, velocities and vibrations. The various techniques will be described, stressing the physical basis of each technique, leading to an understanding of its strengths and limitations, accuracy, working range and, where applicable, trade-offs between price and performance.

The relationship between distance measurement and the other parameters mentioned above is straightforward (see Fig. 1): displacement is the change in distance relative to some reference point whose absolute distance may or may not be determined to the same accuracy; surface profiles are obtained by means of the change in distance measured as the object or the sensor is laterally translated with respect to the other; velocity (speed) and vibrations are the time derivative of the distance or displacement, generally referring to one-directional and oscillatory motion, respectively.


Figure 1 Illustration of the relationship between an absolute distance measurement and the derived quantities of displacement, surface profile, speed, and vibration.

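As a small numerical illustration of these relationships, displacement and speed follow directly from a sampled distance record; the standoff, vibration amplitude, and sampling rate below are invented purely for illustration.

```python
import numpy as np

# Hypothetical distance record: a 10 mm standoff with a 100 um, 5 Hz
# vibration superimposed (all values illustrative).
t = np.linspace(0.0, 1.0, 1001)                   # time, s
z = 0.010 + 1e-4 * np.sin(2 * np.pi * 5 * t)      # distance, m

displacement = z - z[0]          # change in distance from a reference point
velocity = np.gradient(z, t)     # time derivative of the distance

print(f"peak displacement: {displacement.max()*1e6:.1f} um")   # ~100 um
print(f"peak speed:        {velocity.max()*1e3:.2f} mm/s")
```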

Optical techniques for distance measurement have a large variety of uses and applications, as will be discussed in this tutorial. An optics-based technique may be employed to provide fast or automated measurement, when a noncontact method is needed, or because it is the best or only solution (e.g., for very long and very short distances).

In the following sections we will separately discuss the main different optical techniques that have been applied to distance measurement. This survey is more comprehensive than earlier reviews [1,2] that discussed fewer techniques and other papers [3,4] that discuss the merits of different techniques for specific applications. We first discuss in Section 2 six techniques (intensity-based, triangulation, time-of-flight, confocal, and interferometric sensors, and Doppler sensing), which have all been incorporated into commercially available measurement systems. These techniques are presented in an order that (approximately) reflects increasing price. We do not discuss or compare commercially available instruments, and we only mention specific instruments in the context of accurate reporting of experiments and data obtained. Thus the mention of a specific commercial instrument or system does not imply any recommendation or criticism.

In Section 3 we discuss other techniques, namely, scanning interferometry, frequency modulated continuous wave time-of-flight (FM-CW-TOF), and self-mixing interferometry sensors, which have been presented in the literature as research techniques but, to the best of our knowledge, are not used in common commercially available systems. We also mention fiber Bragg sensors, which have been used for displacement sensing, but unlike the other techniques discussed, do not fall under the category of “noncontact sensing.”

As will be seen in the following sections, some optical distance measurement techniques require continuous light sources and other techniques use pulsed light sources. Similarly, some techniques require highly coherent, narrow bandwidth light; others rely on incoherent light, while for some this is not an issue at all.

It should be noted that this tutorial contains an extensive list of references. This bibliography is not intended to be comprehensive, but is rather a representative list taken from the large body of publications on the discussed topics.

2. Sensing Techniques Used in Commercial Systems

2.1. Intensity-Based Sensors

Intensity-based sensors, one of the simplest and thus one of the first types of optical distance measurement systems used, consist of a light source and a detector, in which the light intensity reflected from the object onto the detector is a function of the distance between the light source/detector and the object. These sensors commonly use optical fibers to transmit the light from the source to the object and from the object to the detector. Intensity-based fiber optic sensors have been commercially available for more than 40 years. The name “Fotonic Sensor,” coined by a veteran manufacturer [5], has sometimes been adopted as a generic term for this type of sensor. Figure 2 is a sketch of a typical intensity-based fiber optic sensor [6]. Note that the sensor may use either single fibers or fiber bundles for illumination and detection [5,7,8]. If a bundle is employed, the illumination (transmitting) and detection (receiving) fibers may be arranged in numerous ways—segregated, ordered, or random [5]. Single-fiber sensors may be arranged in an antiparallel geometry [9,10], as in Fig. 2(a), or a V-shaped geometry [11,12] or with transmission across a gap [13].


Figure 2 Illustration of a fiber optic intensity sensor for distance measurements. (a) Two-fiber sensor, showing the object and the operational principle. (b) Various geometries for fiber bundle sensor heads, copied with permission from [6].


Typically, an intensity-based sensor will have a response similar to the one shown in Fig. 3 [14]. The response is characterized by zero signals both at zero distance and at large distances, with an intensity peak at a specific distance, close to the fiber tips. This response can be understood with simple geometric and ray tracing models [6–10], taking into account the overlap between the object surface area illuminated by the illumination fibers and the surface area “seen” by the detection fibers, as well as the solid angle formed between illuminated surface elements and the detection fiber acceptance area. Thus the signal goes to zero for large distances (solid angle goes to zero) as well as for zero distance (no overlap between illuminated areas and the area seen by the detection fibers). The distance that yields the peak intensity, typically hundreds of micrometers, depends [6–10] on the fibers’ diameters and numerical apertures (NAs) and the geometrical distribution of illumination and detection fibers.


Figure 3 Typical signal versus distance response for an intensity-based fiber optic displacement sensor (copied with permission from [14]).


The signal response of Fig. 3 presents two opportunities for distance sensing, as there are two regions of quasi-linear variation of the signal with distance, namely, the near and the far sides of the peak, designated, respectively, the “front slope” and the “back slope.” The front slope usually exhibits higher sensitivity and lower dynamic range than the back slope. Working at distances close to the peak is clearly problematic, as the signal changes weakly and extremely nonlinearly with distance, and any measured signal decrease may correspond to either an increase or a decrease in distance.

In order to use the sensor for measuring the absolute distance to an object, the response curve must be known in advance; i.e., a calibration curve such as Fig. 3 must first be obtained by using the specific object itself. Although different objects may exhibit calibration curves of similar shape, the signal amplitude at a certain distance is object dependent, depending on both the absolute reflectance and the diffusivity of the specific object.
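To make the use of such a calibration concrete, the sketch below inverts a synthetic response curve of the typical shape; the functional form and all length scales are invented for illustration, and a real sensor must be calibrated against the actual target, as stressed above.

```python
import numpy as np

# Synthetic calibration curve with the typical shape of Fig. 3: an
# overlap-driven rise times a solid-angle fall-off.  The functional form
# and all parameters are assumptions, not data from a real sensor.
d_cal = np.linspace(10e-6, 5e-3, 500)        # calibration distances, m
signal = (1 - np.exp(-d_cal / 150e-6)) / (1 + (d_cal / 400e-6) ** 2)

peak = np.argmax(signal)
# Use only the monotonically decreasing "back slope", where the
# signal-to-distance mapping is single valued.
back_d, back_s = d_cal[peak:], signal[peak:]

def distance_from_signal(s):
    """Invert the back-slope calibration by interpolation (np.interp
    requires increasing x, so the decreasing signal is flipped)."""
    return np.interp(s, back_s[::-1], back_d[::-1])

# Round trip: a reading taken at 2 mm should map back to ~2 mm.
s_probe = np.interp(2e-3, d_cal, signal)
print(f"recovered distance: {distance_from_signal(s_probe) * 1e3:.3f} mm")
```

Inverting a signal measured near the peak would be ill posed, since two distances share each signal value there, which is why working close to the peak is avoided.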

The main attractions of this technique for distance sensing are its simplicity and low price and the ability to measure distance at fast repetition rates (typically up to hundreds of kilohertz). Commercially available intensity sensors are offered for various probing distances, from a few millimeters up to about 50 mm, with possible methods for extension [15]. The resolution on the back slope can be 1 µm or better for shiny objects. However, the technique suffers from a number of limitations:

  • a. Precalibration for all target objects is necessary.
  • b. Since the distance is derived from the measured signal intensity, any change in the signal intensity will be interpreted as a distance change. Thus illumination intensity variations, optical connection losses, variations of the target reflectivity [16], dust, dirt, etc. will be interpreted by default to be distance changes.
  • c. The measured signal can be very sensitive to tilt of the target object—for example, consider a highly reflective object such as a mirror. It is easily seen (Fig. 4) that more light will be collected from an “optimally” tilted target than for the target at normal incidence. Furthermore the tilt effect is different at each distance and for each tilt axis.


Figure 4 Demonstration of possible sensitivity of intensity-based fiber optic displacement sensor to the tilt angle of specular or shiny objects (e.g., a mirror). The lower illustration shows why a mirror at distance d will specularly reflect more light into the receiving fiber at a tilt angle θ satisfying tan θ = a/d than at normal incidence, where a is the separation of the fiber centers.


We have recently described [17] a striking effect related to points (b) and (c) above. We have probed machined metal surfaces with surface roughness typically in the range of 1–3 µm. The experiment was performed with various intensity sensors, differing in the collection area of the receiving fiber(s): a single 100 µm diameter core fiber, a single 500 µm diameter core fiber, and a 1.1 mm diameter bundle of 100 µm core fibers (see Fig. 5). The latter two sensors were commercial sensors, manufactured, respectively, by Baumer (P/N FUE200C012) and Philtec (P/N D47-AB1). Each sensor was used to probe the object from the same standoff distance (1.5 mm) while the object was translated laterally with respect to the fiber sensor. Thus the experiment probed different areas of the object surface at constant distance, and one would expect that essentially constant signals should be measured with possibly small (<2%) deviations arising from noise and the <3 µm surface roughness. However, as shown in Fig. 5, as the object is translated laterally the first (smallest) sensor produced signals fluctuating in their intensity by up to 50%. The 500 µm core fiber showed much smaller variations (<10%), while the sensor with the 1.1 mm diameter bundle of signal fibers showed <2% variations. If the 50% and 10% variations are interpreted by using the calibration curve (Fig. 3), distance changes of hundreds or tens of micrometers, respectively, would erroneously be inferred. The signal variations may be explained [17] by modeling the surface as a series of small facets with local tilts randomly oriented about the macroscopic surface normal [18] and by considering light rays being reflected specularly with respect to each local surface normal (see Fig. 6). The relative signal intensity is simulated by ray tracing and by counting the number of rays reflected into the signal fiber(s).


Figure 5 Normalized measured responses of three different fiber sensors at constant distance of 1.5 mm from a machined aluminum metal target (with <3 µm surface roughness), while the target is translated laterally. The nominal distance to the target remains constant, but different areas of the target are exposed to the input light.



Figure 6 Surface roughness model, showing light rays reflected specularly according to the local surface normal. The receiving fiber(s) collect different amounts of light from different sections of the surface.


When different facets of the surface are illuminated, the map of reflected rays changes, and with it the fraction of rays successfully reflected into the signal fiber within its NA. For a single 0.27 NA 100 µm core receiving fiber at 1.5 mm from the surface, these simulations [17] reproduce the large (50%) variations upon lateral translation. As the receiving fiber(s) diameter increases, the number of successfully collected rays increases, statistics improve, and the variations with lateral motion decrease. This effect does not occur for a mirror target because of its translational symmetry or for a true diffuse scatterer with uniformity and translational symmetry on the scale of the spot size (tens to a few hundred micrometers). Accordingly, the 100 µm core sensor did not exhibit any irregularities when translated laterally with respect to either a mirror or a standard white scatterer (see Fig. 7).


Figure 7 The effect of lateral translation of various target objects relative to the sensor with a single-mode transmitting fiber and a 100 µm core receiving fiber. Results are shown for three target objects: a mirror, a standard white scattering surface (Labsphere USRS-99-010), and the machined metal surface of Fig. 5.

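A highly simplified, one-dimensional sketch of this facet picture is given below; the spot size, facet count, tilt spread, and NA are illustrative assumptions, not the actual model of [17]. It nevertheless reproduces the trend of Fig. 5: the relative signal variation under lateral translation shrinks as the collection aperture grows.

```python
import numpy as np

# One-dimensional toy version of the facet model; all parameters are
# illustrative assumptions.
rng = np.random.default_rng(1)

def one_signal(core_radius, n_facets=40, standoff=1.5e-3,
               facet_sigma=np.radians(3.0), na=0.27):
    """Signal from one illuminated patch, modeled as n_facets facets
    with random local tilts about the macroscopic normal.  A facet
    contributes if its specular reflection (at 2*tilt from the normal)
    lands on the receiving-fiber face within the fiber NA."""
    x0 = rng.uniform(-50e-6, 50e-6, n_facets)        # facet positions
    tilt = rng.normal(0.0, facet_sigma, n_facets)    # local facet tilts
    out = 2.0 * tilt                                 # reflected-ray angle
    x_fiber = x0 + standoff * np.tan(out)            # landing position
    ok = (np.abs(x_fiber) < core_radius) & (np.abs(out) < np.arcsin(na))
    return ok.mean()

def relative_variation(core_radius, n_positions=400):
    """Std/mean of the signal as different patches are probed while the
    standoff distance is held constant (lateral translation)."""
    s = np.array([one_signal(core_radius) for _ in range(n_positions)])
    return s.std() / s.mean()

results = {}
for core, label in [(50e-6, "100 um core"), (250e-6, "500 um core"),
                    (550e-6, "1.1 mm bundle")]:
    results[label] = relative_variation(core)
    print(f"{label}: relative signal variation {results[label]:.2f}")
```

In this toy model the variation falls steadily with increasing collection aperture, mirroring the measured 50%, <10%, and <2% fluctuations of the three sensors in Fig. 5.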

Although the results shown in Figs. 4–7 illustrate anomalies or limitations to the use of some intensity sensors, it is not our intention to suggest that these sensors are flawed or should be avoided. These sensors have been successfully used in many applications [19,20], when an appropriate fiber sensor head and geometry was used for the object surface under test. Modifications have also been suggested to overcome some of these limitations: e.g., effects of inherent variations in the surface reflectivity can be eliminated by using the ratio of signals from two receiving fibers at different distances from the surface [16,21–23]. Other configurations offering signal improvement include circular arrangements of receiving fibers [18,24]. We also stress that for the machined surface tested in Fig. 5 the two commercial sensors, based on large collection areas, did not show the anomaly exhibited by the small fiber sensor.

2.2. Triangulation Sensors

Triangulation refers to a procedure in which a distance or position is determined from considerations based on the geometries of similar triangles. This method was used around 600 BC by the Greek mathematician Thales of Miletus to measure the height of the pyramids of Giza and also to determine the distance to a ship at sea [25].

Triangulation-based optical sensors [1,26] will usually operate as shown in Fig. 8. A collimated laser source is used to illuminate the object to be measured. The camera lens optics, laterally displaced from the laser source, images the laser spot on the object onto a position-sensitive detector (typically an array of pixel detectors). As seen in the figure, the distance to the object can be determined from the similar triangles formed as shown.


Figure 8 Principle of optical triangulation sensor. The unknown distance, D, is determined from the known distances E,F and the measured value of G—the distance to the pixel in the position sensitive detector (PSD) recording the image of the laser spot on the measured object.


Commercially available triangulation sensors, with both the source and detector integrated into a single package (as illustrated in Fig. 8), are generally applicable for distance measurements in ranges of approximately 10 mm to 1 m [27]. It is clear that the angular geometry of the sensing principle imposes both a minimum and a maximum sensing distance, beyond which accurate measurements cannot be made: since the laser spot is imaged on the linear detector array, the nearest and farthest detector pixel locations, combined with the optical magnification, will impose these limits.

The distance resolution of a triangulation sensor depends on both the laser beam size and the detection pixel size [28]. Within the designed working range, the resolution will also vary with the distance—the highest resolution (possibly several micrometers) will be found close to the minimum sensing distance. As the distance increases toward the maximum sensing distance, the resolution progressively decreases. This is easily understood from Fig. 8, by considering how far the object must shift in order to shift its image on the detector by one pixel.
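The similar-triangles readout and the degradation of resolution with distance can be sketched as follows; the relation D = E·F/G and all dimensions below are assumptions made for illustration, not the geometry of any particular commercial sensor.

```python
# Similar-triangles readout sketch (assumed geometry, illustrative values).
E = 20e-3        # baseline between laser and imaging lens, m (assumed)
F = 30e-3        # lens-to-detector distance, m (assumed)
pixel = 10e-6    # detector pixel pitch, m (assumed)

def distance(G):
    """Object distance from the measured spot position G on the PSD."""
    return E * F / G

def resolution(D):
    """Distance change corresponding to a one-pixel shift of the spot."""
    G = E * F / D
    return abs(distance(G - pixel) - distance(G))

for D in (0.05, 0.2, 0.8):
    print(f"D = {D*100:4.0f} cm -> one-pixel resolution {resolution(D)*1e6:9.1f} um")
```

In this sketch the one-pixel resolution grows roughly as D², which is the quantitative content of the statement that resolution progressively decreases toward the maximum sensing distance.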

Triangulation sensors offer the advantages of a low price and fast measurement (tens or hundreds of kilohertz are possible). Adding the capability to scan the position of the triangulation laser spot in the object plane opens up the possibility for object shape sensing [27,29,30]. In addition to the distance and accuracy limitations discussed above, triangulation sensors have some additional limitations:

  • a. They do not work well for clear and transparent objects such as glass, water, or liquid surfaces because of the poor visibility of the laser spot on these surfaces.
  • b. The triangulation geometry means that the sensor head must have a minimum width to provide the distance between its transmitter and detector, which prevents its use for sensing inside objects through narrow openings. Furthermore, the sensor head width must be increased as the desired working distance increases, meaning that sensors for distances of meters may become inconveniently large.

Some other applications of triangulation in sensing should be mentioned. Triangulation principles can be applied to determine the exact location of an object from the angles of view recorded from two different known locations (see Fig. 9(a)), for example, photographs taken by two surveillance cameras [31]. Similarly, location can be determined from two measurements of distance, but unknown angles, from two different known locations (see Fig. 9(b)). This will be elaborated further in the following subsection with respect to ground penetrating radar.


Figure 9 (a) Triangulation based on measurements of angles of view θP and θQ from two known observation points, P and Q. The object is located at the intersection of the two lines drawn. (b) Triangulation based on measurement of distances ZP and ZQ from two known observation points, P and Q. The object is located at the intersection of the two arcs drawn.


2.3. Time-of-Flight Sensors

Another very common method for distance measurement is “time of flight,” where the distance is found by sending electromagnetic waves (e.g., light) to the object and measuring the time taken for the waves to travel from the sensor to the object and back [1]. The distance is therefore given by the one-way time-of-flight multiplied by the speed of light. Except for some special cases, the light backscattered to the sensor is orders of magnitude weaker than the launched signal, and so a very sensitive detector and/or signal averaging is required.

The first common application for this method (developed during the Second World War) was radar (an acronym for radio detection and ranging) using microwave and radio-frequency sources and requiring very large transmitters and receivers. The advent of the laser led to the development of more compact (including handheld) time-of-flight measurement systems, based on visible or near-IR light.

Most laser time-of-flight sensors operate by sending a short (typically nanosecond duration) light pulse to the object (see Fig. 10(a)). For objects at a distance >50 m, corresponding to a round-trip transit time far greater than the pulse width, the time-of-flight can be measured by relatively simple detectors and electronics. At distances shorter than tens of meters accurate time-of-flight measurements need to take into account the temporal pulse shape in order to correctly measure the time delay between the peaks of the input and returned pulses. Eventually the input and returned pulses will overlap in time, and photon-counting techniques [32] or very fast detection with autocorrelation algorithms [33] should be used to evaluate the time delay.
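A pulsed measurement in regime (i) of Fig. 10(a) can be sketched as follows; the sample rate, pulse shape, target distance, and return amplitude are all invented for illustration, and the round-trip delay is estimated from the cross-correlation peak of the launched and returned waveforms.

```python
import numpy as np

# Pulsed time-of-flight sketch: ~ns pulse, target well beyond 50 m.
c = 299_792_458.0                          # speed of light, m/s
fs = 2e9                                   # detector sample rate, S/s (assumed)
t = np.arange(0.0, 2e-6, 1.0 / fs)         # 2 us record
pulse = np.exp(-((t - 20e-9) / 3e-9) ** 2)         # launched pulse
true_distance = 120.0                               # m (assumed)
delay = 2.0 * true_distance / c                     # round-trip time
returned = 1e-3 * np.exp(-((t - 20e-9 - delay) / 3e-9) ** 2)  # weak echo

# Lag (in samples) of the correlation peak gives the round-trip delay.
lag = np.argmax(np.correlate(returned, pulse, mode="full")) - (t.size - 1)
measured = c * (lag / fs) / 2.0
print(f"measured distance: {measured:.2f} m")
```

The residual error of a few centimeters here is set by the sampling interval (c/(2fs) = 7.5 cm per sample); real instruments interpolate the correlation peak to do better.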

An alternative approach to the pulse illumination is to use amplitude modulated continuous light [34,35] (see Fig. 10(b)). In this case a phase shift in the modulation signal is measured between the launched and the returned light, and the time-of-flight is determined by dividing the phase shift by 2π times the modulation frequency. The true phase shift is the measured residual phase shift plus an integral number of full cycles (2π phase shifts). This ambiguity can be eliminated and the true phase shift found by measuring at an additional (nonharmonic) modulation frequency. This approach is most practical for measuring distances in the intermediate region from a few meters up to 50 m (times of flight a few times larger than typical short pulses), but more difficult for distances shorter than 1 m, as the required modulation rate approaches the gigahertz range.
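The two-frequency ambiguity resolution can be sketched numerically; the modulation frequencies and target distance below are illustrative assumptions.

```python
import numpy as np

# AM-CW time-of-flight sketch.  Each modulation frequency f yields only
# the residual phase phi = (2*pi*f*t) mod 2*pi, i.e.
# t = (phi + 2*pi*N) / (2*pi*f) for an unknown integer N.  A second,
# nonharmonic frequency picks out the N at which both readings agree.
c = 299_792_458.0
f1, f2 = 10e6, 13e6                    # modulation frequencies, Hz (assumed)
true_distance = 27.40                  # m (assumed)
t_flight = 2.0 * true_distance / c

phi1 = (2 * np.pi * f1 * t_flight) % (2 * np.pi)   # "measured" phases
phi2 = (2 * np.pi * f2 * t_flight) % (2 * np.pi)

best = None
for n1 in range(10):                   # candidate whole cycles at f1
    t1 = (phi1 + 2 * np.pi * n1) / (2 * np.pi * f1)
    # how badly this candidate disagrees with the phase measured at f2
    err = abs((2 * np.pi * f2 * t1 - phi2 + np.pi) % (2 * np.pi) - np.pi)
    if best is None or err < best[0]:
        best = (err, t1)
print(f"resolved distance: {c * best[1] / 2:.2f} m")
```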


Figure 10 (a) Pulsed time-of-flight measurement showing three regimes where the time of flight is (i) much longer, (ii) similar to, and (iii) shorter than the pulse width. For nanosecond pulses these cases correspond to distances (in air) of >50 m, a few meters, and <1 m, respectively. (b) Intensity modulated time-of-flight is an alternative method suitable for distances in range (ii). The phase shift is indicated by the double-headed arrow.


Distances can also be measured by frequency (rather than amplitude) modulated light, as will be discussed in Subsection 3.2.

Time-of-flight sensing in pulsed mode is the dominant sensing technique for distances longer than 50 m and is routinely used for many civilian applications of range sensing, such as mapping and surveying. In these applications, resolution and accuracy generally depend on the accuracy of the electronics. The applicable maximum range distance will depend on the laser power, the detector sensitivity, the reflectivity and visibility of the target object, and the signal/noise ratio consequently obtained. Recently time-of-flight cameras have been introduced [36–38], permitting simultaneous measurement of distances to objects detected by different pixels. Military applications include laser range finders, both handheld [39] and integrated into weapon systems. Another common application of time-of-flight optical sensing is in traffic speed enforcement, using the laser speed gun, which repeatedly measures the distance to a moving car at a high repetition rate. The car’s speed is then determined from the rate of change of the distance with time.
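The speed-gun readout amounts to fitting a slope to repeated range measurements; the repetition rate, ranging noise, and the car's motion below are invented for illustration.

```python
import numpy as np

# Laser speed gun sketch: the range to a receding car is measured
# repeatedly and the speed is the least-squares slope of distance vs time.
rng = np.random.default_rng(2)
rate = 100.0                              # range measurements per second (assumed)
t = np.arange(0.0, 0.5, 1.0 / rate)       # 0.5 s of ranging
true_speed = 30.0                         # m/s (108 km/h, assumed)
d = 150.0 + true_speed * t + rng.normal(0.0, 0.05, t.size)  # 5 cm noise

speed, _ = np.polyfit(t, d, 1)            # slope = speed
print(f"estimated speed: {speed * 3.6:.1f} km/h")
```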

Time-of-flight techniques can also be used for non-line-of-sight applications. Radar has already been mentioned. As well as its use in aviation, radar has become important in applications from civil engineering to archaeology. Ground penetrating radar can reveal discontinuities and objects buried underground or faults in concrete structures [40,41]. The distance to the features of interest is determined by ranging from at least two points, and their position is then determined by triangulation (see Fig. 9(b)).

In some applications acoustic (ultrasound) time-of-flight sensing may be favored over optical time-of-flight sensing [4]. This is true for ranges up to a couple of meters, where optical sensors will have times of flight comparable with nanosecond pulse widths. Since the speed of sound is about 6 orders of magnitude smaller than the speed of light, acoustic sensors’ times-of-flight will be much longer than pulse widths, facilitating easy measurement. Thus acoustic sensors are often preferred for applications such as robotic sensing and automobile reverse/parking sensors, where high accuracy is not required and relatively low cost is important.

2.4. Confocal Sensors

Another type of optical sensor, generally applicable for accurate measurements of displacements and surface profiles at distances of millimeters, is the confocal sensor. Two versions will be considered: monochromatic confocal sensors [42,43] and polychromatic confocal sensors [44–46].

The confocal principle, used in the well-known confocal microscope, is to use the same optics to tightly focus light emanating from an aperture onto the object and to detect the light scattered from the object back into the aperture [47,48]. The maximum signal is found (see Fig. 11) when the object is situated precisely at the image plane defined by the aperture and optics [49]. The signal, i.e., the amount of light returning to the illuminating aperture, varies very sharply with the object position (the full width at half-maximum may be just a few micrometers), permitting submicrometer accuracy for tracking displacements of the object about the image plane [42–46].

In a confocal sensor an optical fiber is an excellent choice for an aperture. A monochromatic confocal sensor—employing a monochromatic light source—commonly operates in closed loop, employing a feedback mechanism to ensure that the sensor is positioned to receive the maximum signal from the object [42,43]. Either the sensor as a whole is translated, or the lens position is adjusted with respect to the fiber. Common applications for such a sensor are minute displacement measurements and surface profiling—i.e., measuring the distance change when the sensor is translated laterally with respect to the object.


Figure 11 Principle of fiber optic monochromatic and polychromatic confocal sensing. (i) Monochromatic confocal sensor with an object at the image plane. (ii) Monochromatic confocal sensor with an object displaced from the image plane (adapted from [49]). (iii) Polychromatic sensor where the image plane position varies with wavelength and a different wavelength will satisfy the confocal geometry for each object position.


In polychromatic confocal sensing (sometimes simply called chromatic confocal sensing) multiple wavelengths of light [50] or a highly broadband source [44–46,51] are used. Chromatic dispersion of the optics will lead to the different wavelength components’ being imaged at different longitudinal points along the optic axis (see Fig. 11). Thus in the region of the image plane each point along the optical axis is the image point of a specific wavelength. This wavelength will be the dominant wavelength in the light backscattered confocally to the detector by an object at this point. Consequently, spectral measurement of the backscattered light can be translated very accurately to object position.
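The wavelength-to-position readout can be sketched as below; the linear 450–700 nm to 0–3 mm calibration and the spectral width of the return are assumptions made purely for illustration.

```python
import numpy as np

# Chromatic confocal readout sketch: dispersion maps wavelength to focal
# position, so position is read from the backscattered spectrum.
wl = np.linspace(450e-9, 700e-9, 512)            # spectrometer axis, m

def wl_to_z(w):
    """Assumed (illustrative) wavelength-to-distance calibration."""
    return (w - 450e-9) / 250e-9 * 3e-3

z_true = 1.2e-3                                  # object position, m
wl_focus = 450e-9 + z_true / 3e-3 * 250e-9       # wavelength focused there
spectrum = np.exp(-((wl - wl_focus) / 5e-9) ** 2)  # confocal return

# Read the position back from the spectral centroid of the return.
z_measured = wl_to_z(np.sum(wl * spectrum) / np.sum(spectrum))
print(f"measured position: {z_measured * 1e3:.3f} mm")
```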

The resolution of a chromatic confocal sensor is related to the magnitude of the chromatic dispersion. In most imaging applications chromatic dispersion is a hindrance, and multiple lens systems (e.g., achromatic doublets) are commonly employed to counteract it. However, in confocal sensing enhanced chromatic dispersion is desired. This may be achieved [44] using a single element lens of high chromatic dispersion glass (low Abbe number), or other means such as the addition of a diffractive optical element [52].

Chromatic confocal sensors offer submicrometer distance resolution over a range of several millimeters. The working distance depends on the focal length of the optics, and typically may be chosen from 1 mm up to several tens of millimeters. Owing to the requirement of tight focusing, high-quality, low f-number optics are employed, and thus longer working distances become impractical. Another important advantage of confocal sensing is that the wavelength versus distance response is independent of the type of object studied, be it reflective or diffuse, light or dark in color. Samples of low reflectivity will, however, be prone to a lower signal-to-noise ratio than highly reflective samples, but this can often be rectified by longer measurement times and/or signal averaging. Commercially available sensors can be operated at rates of up to tens of kilohertz against relatively “cooperative” objects.

A note of caution should be mentioned regarding the sensitivity of these sensors with respect to temperature—see the discussion in Subsection 4.1.

2.5. Interferometric Sensors

There exist many and varied reports in the literature of interferometric techniques for distance and displacement measurements. The classic Michelson interferometer configuration can be used with a highly coherent laser to measure subwavelength (nanometer range) displacements of an object (see, for example, [53]). Basically, interferometry is more appropriate for displacement monitoring than for absolute distance measurement, though multiwavelength and scanning interferometry permit absolute distance measurement as well. However, since these techniques generally require a highly stable optical setup, they are not often used in commercial systems, and their discussion will be postponed until Subsection 3.1.

Interferometric distance sensors that have been successfully commercialized are mostly based on low-coherence sources (see Fig. 12). Low-coherence interferometry has been achieved in a number of variations, and different terminologies have sometimes been used by different authors, leading to some confusion in nomenclature. Early studies [54–56] preferred the term “white-light interferometry” (WLI), but more recently, and especially in biological and medical applications, the name “optical coherence tomography” (OCT) has been preferred [57–60]. The latter term indeed emphasizes the utility of this technique for producing multidimensional images by successive optical probing of different points on the object, i.e., profiling. From an optical point of view, one can classify the various interferometric techniques as either “time domain interferometry” or “frequency domain interferometry.” In the discussion that follows we will use the term WLI to denote time domain interferometry (i.e., the reference position is scanned), and OCT for frequency domain interferometry.

As in any interferometry, WLI combines the outputs of the two arms of the interferometer. If the path difference of the two arms is greater than the coherence length of the low-coherence source, then no variation will be measured in the total output as the length of the reference arm is varied. If, however, the path difference is less than the source coherence length, then typical sinusoidal variations will be measured in the output as the reference arm length is varied—and the maximum contrast will be recorded when the path lengths are exactly balanced (see Fig. 12).


Figure 12 Basic setup for WLI and typical interference pattern as a function of the reference arm mirror movement. A low-coherence source is used, in this case a fiber-coupled 1.5 µm superluminescent diode with 60 nm spectral width. The interferogram shows an oscillation period of half the wavelength, and width corresponding to the coherence length of the source.


When WLI is used as a distance or displacement sensor [54,56,61,62] the position of the object is deduced by scanning the position of the reference reflector to find the point at which the two arms are exactly balanced, i.e., the maximum fringe contrast. The light source may be a visible or near-IR diode or incandescent source with spectral width of up to several hundred nanometers. The recently developed fiber-coupled superluminescent diodes (SLEDs) are particularly attractive for this application. These sources have coherence lengths in the micrometer range. The coherence length lc is given by the width of Fourier transform of the spectrum multiplied by c. The relation for an ideal Gaussian band shape

lc = λ²/Δλ,
where λ and Δλ are the central wavelength and the spectral width, serves as an adequate approximation for most broadband sources.
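For the SLED of Fig. 12, this relation gives a coherence length of a few tens of micrometers; a one-line check:

```python
# Coherence length l_c = lambda^2 / (Delta lambda), evaluated for the SLED
# of Fig. 12 (1.5 um center wavelength, 60 nm spectral width).
def coherence_length(center_wavelength, spectral_width):
    return center_wavelength**2 / spectral_width

l_c = coherence_length(1.5e-6, 60e-9)
print(l_c)    # 3.75e-05 m, i.e., a few tens of micrometers
```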

Since WLI systems require a full scan of the reference reflector to determine a single object distance, the technique is relatively slow, and is adequate for measuring distances to, or displacements of, quasi-static objects. WLI can, in principle, determine displacements with an accuracy of a fraction of a wavelength [54,63], although in practice other issues, such as the signal-to-noise ratio and the accuracy of the reference scan, will often limit the accuracy to around 1 µm.

WLI systems for various applications have been described; as examples we mention measurement of the thickness of silicon wafers [61] and of distances in machine-made industrial parts [62]. In WLI systems it is usually necessary to ensure that the two arms of the interferometer are always at identical temperatures; for a case where this is not ensured, we have demonstrated a temperature-independent WLI design [64].

As mentioned above, an alternative to scanning the reference reflector position is to scan in the frequency (wavelength) domain—the OCT approach [57,65]. The reference is kept at a fixed position and the superposition of signals from the object and reference is measured as a function of optical frequency. Two different approaches are possible:

  • (i) Using a broadband source as in WLI, and analyzing the spectral content with filters or monochromators or by dispersing the different frequencies onto a detector array
  • (ii) Using a narrowband source that is tunable over a large range [66].

In either case, the object position is determined from the Fourier transform of the spectral data, illustrating the mathematical equivalence of time domain and frequency domain scanning. In practice, the distance accuracy of frequency scanning (OCT) will be limited by the frequency span of the scan, and therefore OCT systems rarely offer accuracies better than 5 µm. Their main advantage over time-domain systems (i.e., WLI) is the potentially much higher scanning speed.
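The recovery of distance from spectral data can be sketched as follows. This is a simplified simulation, not a description of any commercial instrument: a cosine spectral interferogram is generated for an assumed 1 mm arm imbalance and Fourier transformed to recover the round-trip delay:

```python
import numpy as np

c = 3.0e8
L_true = 1.0e-3                       # arm imbalance to recover (m, assumed)
tau = 2 * L_true / c                  # round-trip delay difference (s)

# Spectral interferogram: fringes in intensity versus optical frequency.
nu0 = 2.0e14                          # sweep start frequency (~1.5 um light)
span = 8.0e12                         # swept frequency span (Hz, illustrative)
N = 4096
nu = nu0 + np.linspace(0.0, span, N, endpoint=False)
spectrum = 1.0 + np.cos(2 * np.pi * tau * nu)

# Fourier transform of the spectral data: the peak sits at the delay tau.
delays = np.fft.rfftfreq(N, d=span / N)            # conjugate "delay" axis (s)
amplitude = np.abs(np.fft.rfft(spectrum - spectrum.mean()))
tau_est = delays[np.argmax(amplitude)]
L_est = c * tau_est / 2
print(L_est)    # ~1 mm, to within the delay resolution of 1/span
```

The delay resolution is 1/span, which is why the accuracy of frequency-scanning systems is set by the frequency span of the scan, as noted above.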

Although time-domain interferometry (WLI) may be cheaper and may offer higher resolution than frequency domain interferometry (OCT), the need for mechanical scanning of the reference position at every point limits its speed relative to OCT, as mentioned above. OCT scanning at up to megahertz rates is possible [67], which gives a crucial advantage to OCT in medical imaging applications. OCT systems have achieved widespread use in ophthalmology for eye examinations [68,69]. Other OCT systems, coupled to an optical fiber inside a catheter, are used in cardiology to image plaque blocking arteries [70], and in general endoscopy [71]. Infrared-light-based OCT can penetrate skin to image cancers [72]. In some of the latter medical applications the light path to the object and back to the detector is not through direct straight-line air paths, but rather through much more complicated multiscattering processes and elaborate paths through, e.g., skin, rendering the mathematical description of the process much more complicated. However, the principle remains the same: in order for two light beams, whose origin is a low-coherence source, to form interference fringes, their path lengths must be equal to within the source coherence length.

Many examples of nonmedical industrial applications of OCT have been reviewed by Stifter [73], among them examination of electronic components and injection-molded plastics, detection of near-surface defects in ceramics, examinations of works of art, and data storage.

By now we have surveyed the most common and most widely commercialized approaches to basic optical distance measurement. It should, however, be noted that other, less widespread techniques have been commercialized, typically for a niche application and/or on the basis of patented technology. As an example we mention conoscopic holography [74,75] for three-dimensional shape measurement.

We now turn to describe how temporal changes of the distance may be measured by the techniques described in Subsections 2.1–2.5.

2.6. Measurements of Velocity and Vibration Based on Successive Distance Measurements

We described above how successive distance measurements (e.g., by time of flight) can determine the rate of change of object position, and noted the common use of this method in law enforcement for measuring the speed of cars. Similarly, successive distance measurements can be used to measure periodic motion such as vibrations. This is, of course, subject to the following restrictions:

  • (i) The amplitude of motion is larger than the distance resolution of the measurement, and
  • (ii) The period of motion is significantly longer than the time required for each individual distance measurement.

As long as these two conditions apply, one may use any of the distance measuring techniques described above to track vibrations. An interesting intensity sensor for displacement and vibration in a microelectromechanical systems device has been described [76]. In Fig. 13 we show how a commercial chromatic confocal sensor (Micro Epsilon controller IFC 2401 and head IFS 2402-4) measuring at 2 kHz can track an object vibrating at 64 Hz, with an amplitude of approximately 60 µm.


Figure 13 Measurement of a vibrating object by chromatic confocal sensor.

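The situation of Fig. 13 can be imitated with synthetic data: distance samples taken at 2 kHz of an idealized 64 Hz, 60 µm vibration (noise-free, for illustration only) are Fourier transformed to recover the vibration frequency:

```python
import numpy as np

fs = 2000.0                                       # distance samples per second
t = np.arange(2048) / fs
distance = 60e-6 * np.sin(2 * np.pi * 64.0 * t)   # idealized noise-free motion (m)

# Both restrictions above are met (60 um amplitude >> sensor resolution,
# 64 Hz << 2 kHz sampling), so an FFT of the distance record recovers
# the vibration frequency directly.
amplitude = np.abs(np.fft.rfft(distance))
freqs = np.fft.rfftfreq(len(t), d=1 / fs)
f_est = freqs[np.argmax(amplitude[1:]) + 1]       # skip the DC bin
print(f_est)    # ~64 Hz
```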

Another approach to velocity measurement is to employ several sensors, each probing a different known point along the path of motion of the object. The velocity is obtained from the time difference between the peak responses of adjacent sensors. We have measured [77] ballistic velocities in a system where four optical fibers, in a collinear array, each served as a confocal sensor able to probe when the object passes its conjugate image point. By using two different probe wavelengths, eight image points are created. This system enabled measurement of ballistic velocities (1.7 km/s) to an accuracy of 0.3%.
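A sketch of this time-difference method, with hypothetical sensor positions and peak times chosen to give roughly the ballistic speed quoted above:

```python
# Sensor image points along the flight path and the times at which each
# sensor's response peaked (all values hypothetical, for illustration).
positions = [0.00, 0.01, 0.02, 0.03]              # sensor image points (m)
peak_times = [0.0, 5.88e-6, 11.76e-6, 17.65e-6]   # detected peak times (s)

# Average velocity over adjacent sensor pairs.
velocities = [
    (positions[i + 1] - positions[i]) / (peak_times[i + 1] - peak_times[i])
    for i in range(len(positions) - 1)
]
v = sum(velocities) / len(velocities)
print(v)    # ~1.7 km/s
```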

2.7. Direct Velocity Measurement—Doppler Sensing

An alternative optical technique for direct measurement of velocity and vibration is optical Doppler sensing, which deduces the velocity of a moving object by detecting the Doppler frequency shift of the light reflected from it [78,79]. This technique also originated in (coherent) radar, which predates [80] the optical-frequency implementations that are the focus of this tutorial.

In optical Doppler sensing the light source is a highly coherent CW laser, and the detection is usually by a heterodyne technique. In Fig. 14 we show a simple demonstration of this effect for an object having constant velocity. The detector simultaneously receives the launched frequency (backreflection from the fiber tip) and the Doppler shifted frequency from the moving object, and these two waves, when added coherently, produce a beat at the difference frequency, easily seen on an oscilloscope (Fig. 14(b)). Fourier transform analysis of the detector signal, performed by an electronic spectrum analyzer (Fig. 14(c)), shows that the observed beat frequency for an object velocity of 100 µm/s is around 130 Hz. This agrees very well with the expected value of the Doppler shift, given by

Δf/f = 2ν/c, or λΔf = 2ν.

Substituting λ = 1.5×10⁻⁶ m and ν = 10⁻⁴ m/s gives Δf = 133 Hz (note the factor of 2 in the Doppler shift [78,79], as shifts occur both for the light observed by the moving object and for the reflected light).

Similarly, the spectrum analyzer reveals beat frequencies lower by factors of 2 and 4 for correspondingly lower velocities, 50 µm/s and 25 µm/s— see Fig. 14(c).


Figure 14 Demonstration of fiber-optic-assisted Doppler velocimetry. (a) The experimental setup, using a pigtailed coherent 1.5 µm laser source and a moving mirror. (b) Detector output measured by an oscilloscope for a motion of 100 µm/s. (c) Detector output measured by an electronic spectrum analyzer for a motion of 100 µm/s (black), 50 µm/s (blue), and 25 µm/s (red).

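The expected beat frequencies for the three velocities of Fig. 14 follow directly from the relation above:

```python
# Doppler beat frequency Delta_f = 2*v/lambda (the factor 2: a shift on
# illumination of the moving object and again on reflection).
wavelength = 1.5e-6                       # m, the source of Fig. 14
velocities = (100e-6, 50e-6, 25e-6)       # m/s, the three measured speeds
beats = [2 * v / wavelength for v in velocities]
print(beats)    # ~133 Hz, ~67 Hz, ~33 Hz
```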

Important applications of Doppler-based optical velocity measurement should be mentioned. Health care applications include arterial blood flow measurements [81,82]. Remote wind measurement is performed by using both radio frequency (radar) and optical frequency (lidar) sources [83–85]. Although less common today than time-of-flight-based sensing, Doppler shifted radar sensing is also used in law enforcement for measuring the speed of cars.

A simple calculation shows that the Doppler shift imparted to optical frequency light (~10¹⁴ Hz) by an object moving at ballistic velocities of kilometers per second will be in the gigahertz range. Direct Doppler shift detection at these frequencies has become feasible in recent years [86], supplementing interferometric fringe techniques developed earlier [87,88].

For more complex motions, such as vibrations, the Doppler velocity shift will vary periodically (a superposition of one or more sinusoidal motions), and the vibration data are extracted by mathematical analysis (demodulation) of the Doppler spectrum [89]. This analysis yields both the vibration frequency and amplitude, and can detect much smaller amplitudes and much higher frequencies than demonstrated in Fig. 13. Doppler vibrometry is routinely performed [89–91] as a quality control check during production of consumer items (e.g., automobiles, refrigerators, and washing machines), as excessive or abnormal vibration during operation will highlight manufacturing faults and weaknesses. To this end, commercial systems are available offering single-point and multipoint vibrational analysis. Note that the laser is Doppler shifted only by the component of the vibration along the propagation axis, and a combination of beams may be used [89,90] to provide multidimensional vibration analysis.

3. Other Optical Distance Sensing Techniques

3.1. Multiple-Wavelength and Scanning Interferometry

When coherent light of wavelength λ is used to measure a physical distance L in an interferometric setup (i.e., the distance to be measured forms part of the interferometer), the measurement result can only be interpreted in terms of a measured residual phase ϕ plus an unknown number N of full wavelengths

L=(N+ϕ/2π)λ.

The value of L, the absolute distance, can be obtained from additional interferometric measurements by using at least one other independent wavelength [92,93]. Strictly speaking, interferometric measurements at two (or more) wavelengths will always have one more unknown quantity than the number of equations, as in Eq. (3); however, the equations can usually be solved under the constraints of integral N values and an approximate knowledge of L from low-resolution measurement.
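A small numerical sketch of the two-wavelength procedure (all values illustrative): given the fractional phases measured at two wavelengths and a coarse estimate of L, a search over integer fringe orders recovers the absolute distance:

```python
# Two-wavelength interferometry sketch, following Eq. (3): the fractional
# phases phi/2pi at two wavelengths plus a coarse estimate of L fix the
# integer fringe orders and hence the absolute distance. Values illustrative.
lam1, lam2 = 1.500e-6, 1.520e-6           # m
L_true = 1.234567e-3                      # m ("unknown" to the measurement)

f1 = (L_true / lam1) % 1.0                # measurable residual phase / 2pi
f2 = (L_true / lam2) % 1.0
coarse = 1.23e-3                          # low-resolution knowledge of L (m)

# Try integer orders N1 near the coarse estimate; keep those also consistent
# with an integral fringe order at the second wavelength.
solutions = []
for N1 in range(int(coarse / lam1) - 40, int(coarse / lam1) + 40):
    L1 = (N1 + f1) * lam1                 # candidate distance from lambda1
    N2 = round(L1 / lam2 - f2)            # nearest integer order at lambda2
    L2 = (N2 + f2) * lam2
    if abs(L1 - L2) < 1e-9:               # both equations satisfied
        solutions.append(L1)

print(solutions)    # a single value, equal to L_true
```

The search window must be narrower than the synthetic wavelength λ1λ2/|λ1−λ2|; otherwise the solution is not unique, which is why the coarse knowledge of L is required.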

Alternatively, the laser frequency (wavelength) can be carefully scanned [94,95] and the interference signal will pass through a series of fringes, whose frequency spacing will reveal the absolute value of L. This value can be determined more precisely by performing a scan between two known wavelengths and counting the number of fringes observed during this scan.
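A sketch of the fringe-counting variant, with illustrative numbers: the interference signal is generated over a known frequency scan, and the fringe count over that scan reveals L:

```python
import numpy as np

# Scanning-source interferometry sketch: count fringes while the laser is
# tuned between two known optical frequencies. Values are illustrative.
c = 3.0e8
L_true = 0.75                                      # m
nu = np.linspace(193.000e12, 193.002e12, 20001)    # scanned frequency (Hz)
signal = np.cos(2 * np.pi * L_true * nu / c)       # interference signal vs nu

# One fringe passes each time L*nu/c advances by one cycle,
# i.e., two zero crossings per fringe.
sign = np.signbit(signal)
crossings = np.count_nonzero(sign[1:] != sign[:-1])
fringes = crossings / 2
L_est = fringes * c / (nu[-1] - nu[0])
print(fringes, L_est)    # 5 fringes over the 2 GHz scan -> L = 0.75 m
```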

These approaches allow absolute distance measurement of high accuracy; typically the uncertainty may reach a value as low as 10⁻⁶–10⁻⁷ times the absolute distance [96,97]. The ultimate precision depends on the mechanical stability of the optical setup, the uncertainty in the laser wavelength, and phase jumps when scanning the laser.

3.2. Frequency Modulated Continuous Wave Time-of-Flight

Scanning interferometry is similar to another technique known as FM-CW time-of-flight (FM-CW-TOF) [98,99], which also has its origin in radar applications. As in pulsed time of flight, one detects light backreflected from the target, deduces the round-trip time of flight tf, and from it the absolute distance. However, instead of laser pulses, frequency-modulated CW light is used. For a linear chirp of frequency with time, dν/dt, light backreflected from the target will differ by a frequency (dν/dt)tf from the light emitted at that instant by the source. Combining these waves at a detector will lead to a beat at frequency (dν/dt)tf. Since the frequency tuning range is limited, the frequency tuning should follow a periodic sawtooth or triangular path in order to extend the measurement time and permit more accurate determination of the beat frequency.
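The arithmetic of FM-CW-TOF is simple enough to sketch directly (the chirp rate and distance below are illustrative):

```python
# FM-CW time-of-flight sketch: a linear chirp d(nu)/dt makes the return
# beat against the outgoing light at f_beat = (d(nu)/dt) * t_f.
c = 3.0e8
chirp_rate = 1.0e14          # d(nu)/dt of the source (Hz/s, illustrative)
L = 15.0                     # target distance (m, illustrative)

t_f = 2 * L / c                           # round-trip time of flight (s)
f_beat = chirp_rate * t_f                 # beat frequency at the detector (Hz)
L_est = c * f_beat / (2 * chirp_rate)     # distance recovered from the beat
print(t_f, f_beat, L_est)    # 1e-07 s, 10 MHz beat, 15 m
```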

The first reported optical FM-CW-TOF systems used diode lasers whose output wavelength was current dependent, but achieving the goal of a linear dν/dt was sometimes difficult, since linear current modulation did not yield linear frequency modulation. Another wavelength scanning technique uses cavity-tunable lasers modulated by a linear sawtooth voltage. Recent improvements in semiconductor diode technology have led [100] to a renewal of interest in these sources for FM-CW-TOF.

Since both Doppler velocimetry and FM-CW-TOF involve measurement of beat frequencies between light reflected from the target and a reference, it is possible to simultaneously measure the distance and velocity of a moving object by FM-CW-TOF [101]. Similarly, scanning source interferometry, which measures a sinusoidal optical interference signal, has been used to also measure an additional minute sinusoidal oscillation (vibration) of an object superposed on the regular interference signal [102].

3.3. Self-Mixing Interferometry

Another interferometric technique used to measure displacement is self-mixing interferometry. This approach first appeared 15–20 years ago [103,104], and comprehensive reviews of this field have been published [105,106].

Self-mixing interferometry uses a single-mode laser, usually a simple diode laser, and exploits the fact [104,106] that any reflection (feedback) from the target that reenters the diode cavity, even at the parts-per-million level, will modulate the laser output. This occurs because the laser must achieve stability not only in its “internal” cavity, but also in the “external” cavity formed by the target and the diode's internal cavity [104]. This modulation is most conveniently measured in the amplitude (power) domain, often simply by tracking the monitor photodiode in the laser package [106]. Because of this feedback, as the target object is displaced, the laser power output oscillates; one oscillation period corresponds to a target displacement of half the wavelength. If the feedback is moderately strong, the oscillations become asymmetrically distorted, and it is possible to distinguish between advancing and receding displacement [106].
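A sketch of self-mixing displacement readout under weak feedback (a sinusoidal power modulation is assumed; with stronger feedback the waveform becomes the asymmetric shape mentioned above): counting power oscillations, at half a wavelength each, recovers the displacement magnitude:

```python
import numpy as np

# Self-mixing sketch: the diode power oscillates once per lambda/2 of target
# displacement; counting oscillations gives the displacement magnitude.
wavelength = 850e-9                            # m (illustrative diode laser)
displacement = np.linspace(0, 10e-6, 5001)     # target moves 10 um (assumed)
power = 1 + 0.1 * np.cos(4 * np.pi * displacement / wavelength)  # weak feedback

# Count oscillation periods via zero crossings of the AC part of the power.
ac = power - power.mean()
sign = np.signbit(ac)
crossings = np.count_nonzero(sign[1:] != sign[:-1])
measured = (crossings / 2) * (wavelength / 2)  # half-wavelength per period
print(measured)    # close to the 10 um displacement, quantized to lambda/4
```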

The self-mixing technique is inexpensive and offers high accuracy in displacement measurement, which can be extended to velocity and vibration measurements [107]. Measurement of two targets has also been reported [108]. Absolute distance measurement has been reported by observing self-mixing under current modulated operation [109,110] as in FM-CW.

3.4. Measurements with Fiber Bragg Gratings

It should also be mentioned that displacement can be measured to high accuracy with fiber Bragg gratings (FBGs). These periodic gratings can be inscribed in optical fibers and reflect only light whose wavelength corresponds exactly to the periodicity of the grating (the Bragg condition). If the fiber is heated, stretched, or strained, the grating dimensions, as well as the fiber index of refraction, change, and consequently the Bragg wavelength changes. Thus these gratings are often employed as sensors [111–113], with strain resolutions of typically 10 microstrains (10 parts per million).
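The strain-to-wavelength arithmetic can be sketched as follows, using the typical effective photo-elastic coefficient of silica (p_e ≈ 0.22) and an assumed geometry; the relation Δλ/λ = (1 − p_e)·ε is the standard FBG strain response:

```python
# FBG displacement sensing sketch: strain shifts the Bragg wavelength by
# d(lambda)/lambda = (1 - p_e) * strain, with p_e ~= 0.22 for silica fiber.
lambda_bragg = 1550e-9       # m; typical telecom-band FBG
p_e = 0.22                   # effective photo-elastic coefficient (typical)

gauge_length = 0.10          # m of fiber between the glued points (assumed)
displacement = 50e-6         # object displacement to detect (m, assumed)

strain = displacement / gauge_length          # dimensionless strain
shift = lambda_bragg * (1 - p_e) * strain     # Bragg wavelength shift (m)
print(strain, shift)         # 5e-4 strain -> ~0.6 nm shift
```

At the 10 microstrain resolution quoted above, this assumed 10 cm gauge length would resolve displacements of about 1 µm.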

FBGs can be employed as sensitive displacement sensors in several ways. One example, shown in Fig. 15, is based on attaching (usually by gluing) one point of the fiber to a fixed support, and a second point to the object of interest, with the FBG between them [114–116]. Displacement of the object strains the fiber, and this strain is monitored by optical interrogation of the Bragg wavelength. Needless to say, the mere presence of the fiber, glued at both points, may severely affect the displacement being measured, and restricts it to no more than approximately 1% of the initial distance (in the case of silica fibers). In fact, this is obviously not a “noncontact” measurement. Other reported implementations have measured displacements perpendicular to the fiber axis [117,118].


Figure 15 Schematic diagram showing how a FBG can be used to monitor displacement of the object. The red dots denote that the fiber is glued as shown to the object and a fixed support.


Regular single-mode optical fibers (i.e., without FBGs) can also be used to measure displacements. The setup of Fig. 15 is modified so that a regular fiber is mounted in a loop between the fixed support and the object. Displacement of the object changes the loop diameter and consequently the bending loss associated with it [119]. Sensitivity could be increased by taking advantage of the fact that there are wavelength-dependent interferences in the bending loss spectrum [120]. Another embodiment of bend-loss dependence [121] uses a single-mode–multimode–single-mode fiber sequence, where the bending induced by the object displacement is felt by the multimode segment, changing the coupling strength to the output single-mode fiber.

4. Summary

4.1. Thermal Effects

When displacement measurements are performed with high accuracy, one must be careful to avoid spurious experimental effects such as external vibrations or shocks and temperature changes, which can influence the measured quantity.

These effects are well known in interferometry, and will not be discussed here. In intensity-based systems, the temperature effects should not be severe, though shocks and vibrations may certainly influence the measurements. In the course of our work, we have considered more closely the case of thermal effects on confocal sensing and we present here, as an example, our main findings.

The wavelength versus distance response of a confocal sensor is calibrated at a specific temperature, and under such conditions submicrometer accuracy and resolution are possible. However, if the temperature changes, the wavelength versus distance curve will change, mostly because of the thermo-optic coefficient (∂n/∂T) of the lens material, as well as mechanical thermal expansion. Thermal changes, even of the order of one degree, will affect the absolute distance measurement by an amount comparable to the constant-temperature accuracy [122]. This can be seen from the following simple calculation, realizing that the confocal sensor “translates” the peak wavelength to distance via a calibration function depending on the lens refractive index n(λ,T).

If the temperature varies from the standard temperature T0 to (T0+ΔT), the confocal wavelength at a constant distance will be (λ0+Δλ), where λ0 is the confocal wavelength at T0. Thus

n(λ0,T0)=n(λ0+Δλ,T0+ΔT),
and so
Δλ(∂n/∂λ) = −ΔT(∂n/∂T).

Typical magnitudes for silica glasses at visible wavelengths are |∂n/∂λ| = 2×10⁻⁵/nm and ∂n/∂T = 8×10⁻⁶/°C, so we expect |Δλ/ΔT| = 0.4 nm/°C.

Thus, due to the thermo-optic effect, a 1° temperature change will typically change the confocal wavelength by 0.4 nm (mechanical thermal expansion is assumed to be a lesser effect). For a confocal sensor typically using 400 nm bandwidth of light and offering a working range of 2 mm, the average dependence of the confocal distance on wavelength dZ/dλ will be 5 µm/nm. Thus the 1° temperature change, which led to a change of 0.4 nm in the confocal wavelength, will erroneously be interpreted by the sensor to be a 2 µm displacement. This value is indeed the experimental temperature sensitivity of the confocal sensor reported by Litwin et al. [122].
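The chain of numbers in this paragraph can be reproduced in a few lines:

```python
# Confocal thermal sensitivity sketch, reproducing the numbers in the text.
dn_dlambda = 2e-5            # |dn/d(lambda)| of the lens glass (per nm)
dn_dT = 8e-6                 # thermo-optic coefficient dn/dT (per degC)
dZ_dlambda = 5.0             # sensor response in um per nm (2 mm / 400 nm)

dlambda_dT = dn_dT / dn_dlambda              # confocal wavelength drift (nm/degC)
error_per_degree = dlambda_dT * dZ_dlambda   # apparent displacement (um/degC)
print(dlambda_dT, error_per_degree)          # 0.4 nm/degC -> 2 um/degC
```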

Thus, in the absence of temperature stabilization (or, alternatively, correction), the absolute accuracy of confocal sensing is somewhat reduced.

4.2. Concluding Remarks

This tutorial has surveyed various optical techniques which can be used for distance measurements, as well as derived quantities such as displacement, surface structure, velocity and vibration.

In Fig. 16 we present a diagram showing the applicable accuracy and working distances of the techniques discussed. The boundaries of the regions of applicability of each type of sensor are obviously approximate and should be used for guidance only.


Figure 16 Comparison of typical resolutions and working distances of the optical distance measurement techniques discussed in this article. The entry for triangulation is divided into a filled region, corresponding to measurement from a single observation point, and a shaded region for multiple observation points.


The main feature evident from the graph is that the resolution of most techniques degrades as the working distance increases; the exception is laser interferometry, which offers a wide dynamic range of distances at reasonably constant accuracy. This technique, as mentioned in the text, is more complex and costly than the other, simpler techniques.

Acknowledgments

This article is based on an oral tutorial presented by G. Berkovic at the IEEE Sensors 2011 conference in Limerick, Ireland. Most of this manuscript was written while G. Berkovic was on sabbatical leave at the Faculty of Engineering, Bar Ilan University, Ramat Gan 52900, Israel. The authors thank Mr. Gil Atar for enlightening them on the connection between Thales of Miletus and laser triangulation, and Cheng Wu, President of eFunda Inc., for permission to reproduce figures from his web site.

References and Notes

1. M.-C. Amann, T. Bosch, M. Lescure, R. Myllylä, and M. Rioux, “Laser ranging: a critical review of usual techniques for distance measurement,” Opt. Eng. 40(1), 10–19 (2001). [CrossRef]  

2. P. M. B. S. Girao, O. A. Postolache, J. A. B. Faria, and J. M. C. D. Pereira, “An overview and a contribution to the optical measurement of linear displacement,” IEEE Sens. J. 1(4), 322–331 (2001). [CrossRef]  

3. P. J. Boltryk, M. Hill, J. W. McBride, and A. Nascè, “A comparison of precision optical displacement sensors for the 3D measurement of complex surface profiles,” Sens. Actuators A Phys. 142(1), 2–11 (2008). [CrossRef]  

4. J. Borenstein, H. R. Everett, and L. Feng, Navigating Mobile Robots: Sensors and Techniques (A. K. Peters, 1995).

5. C. Menadier, C. Kissinger, and H. Adkins, “The fotonic sensor,” Instruments Control Syst. 40, 114–120 (1967).

6. “Fiber optic sensors: Introduction,” http://www.efunda.com/DesignStandards/sensors/fotonic/fotonic_intro.cfm.

7. R. O. Cook and C. W. Hamm, “Fiber optic lever displacement transducer,” Appl. Opt. 18(19), 3230–3241 (1979). [PubMed]   [CrossRef]  

8. A. Shimamoto and K. Tanaka, “Geometrical analysis of an optical fiber bundle displacement sensor,” Appl. Opt. 35(34), 6767–6774 (1996). [PubMed]   [CrossRef]  

9. H. Wang, “Reflective fibre optical displacement sensors for the inspection of tilted objects,” Opt. Quantum Electron. 28(11), 1655–1668 (1996). [CrossRef]  

10. H. Golnabi and P. Azimi, “Design and operation of a double-fiber displacement sensor,” Opt. Commun. 281(4), 614–620 (2008). [CrossRef]  

11. W. H. Ko, K.-M. Chang, and G.-J. Hwang, “A fiber-optic reflective displacement micrometer,” Sens. Actuators A Phys. 49(1–2), 51–55 (1995). [CrossRef]  

12. J. A. Powell, “A simple two fiber optical displacement sensor,” Rev. Sci. Instrum. 45(2), 302–303 (1974). [CrossRef]  

13. V. Trudel and Y. St-Amant, “One- and two-dimensional single-mode differential fiber-optic displacement sensor for submillimeter measurements,” Appl. Opt. 47(8), 1082–1089 (2008). [PubMed]   [CrossRef]  

14. http://www.efunda.com/DesignStandards/sensors/fotonic/fotonic_theory.cfm

15. The distance range may be extended by collimating the light from the transmitting fiber; see W. Shen, X. Wu, H. Meng, G. Zhang, and X. Huang, “Long distance fiber-optic displacement sensor based on fiber collimator,” Rev. Sci. Instrum. 81(12), 123104 (2010). [PubMed]   [CrossRef]  

16. P. Li, H. Zhang, Y. Zhao, and L.-Z. Yang, “New compensation method of an optical fiber reflective displacement sensor,” Proc. SPIE 3241, 474–476 (1997). [CrossRef]  

17. G. Berkovic, S. Zilberman, and E. Shafir, “Size effect in fiber optic displacement sensors,” in Optical Sensors, OSA Technical Digest (online) (Optical Society of America, 2012) SM4F.6.

18. J. Liu, K. Yamazaki, Y. Zhou, and S. Matsumiya, “A reflective fiber optic sensor for surface roughness in-process measurement,” J. Manuf. Sci. Eng. 124(3), 515–522 (2002). [CrossRef]  

19. G. J. Jako, K. E. Hickman, L. A. Maroti, and S. Holly, “Recording of the movement of the human basilar membrane,” J. Acoust. Soc. Am. 41(6), 1578–9999 (1967). [CrossRef]  

20. M. Johnson, “Fiber displacement sensors for metrology and control,” Opt. Eng. 24, 961–965 (1985).

21. J. Zheng and S. Albin, “Self-referenced reflective intensity modulated fiber optic displacement sensor,” Opt. Eng. 38(2), 227–232 (1999). [CrossRef]  

22. C. P. Cockshott and S. J. Pacaud, “Compensation of an optical fibre reflective sensor,” Sens. Actuators 17(1–2), 167–171 (1989). [CrossRef]  

23. Y. Libo and Q. Anping, “Fiber-optic diaphragm pressure sensor with automatic intensity compensation,” Sens. Actuators A Phys. 28(1), 29–33 (1991). [CrossRef]  

24. A. Rostami, M. Noshad, H. Hedayati, A. Ghanbari, and F. Janabi-Sharifi, “A novel and high-precision optical displacement sensor,” Int. J. Comput. Sci. Network Security 7, 311–316 (2007).

25. See, for example, “Thales,” http://en.wikipedia.org/wiki/Thales.

26. Z. Ji and M. C. Leu, “Design of optical triangulation devices,” Opt. Laser Technol. 21(5), 339–341 (1989). [CrossRef]  

27. F. Chen, G. M. Brown, and M. Song, “Overview of three-dimensional shape measurement using optical methods,” Opt. Eng. 39(1), 10–22 (2000). [CrossRef]  

28. R. G. Dorsch, G. Häusler, and J. M. Herrmann, “Laser triangulation: fundamental uncertainty in distance measurement,” Appl. Opt. 33(7), 1306–1314 (1994). [PubMed]   [CrossRef]  

29. M. Rioux, “Laser range finder based on synchronized scanners,” Appl. Opt. 23(21), 3837–3844 (1984). [PubMed]   [CrossRef]  

30. K.-C. Fan, “A non-contact automatic measurement for free-form surface profiles,” Comput. Integrated Manuf. Syst. 10(4), 277–285 (1997). [CrossRef]  

31. Y. Yakimovsky and R. Cunningham, “A system for extracting three-dimensional measurements from a stereo pair of TV cameras,” Comput. Graphics Image Process. 7(2), 195–210 (1978). [CrossRef]  

32. J. S. Massa, G. S. Buller, A. C. Walker, S. Cova, M. Umasuthan, and A. M. Wallace, “Time-of-flight optical ranging system based on time-correlated single-photon counting,” Appl. Opt. 37(31), 7298–7304 (1998). [PubMed]   [CrossRef]  

33. J. Pehkonen, P. Palojärvi, and J. Kostamovaara, “Receiver channel with resonance-based timing detection for a laser range finder,” IEEE Trans. Circ. Syst. 53(3), 569–577 (2006). [CrossRef]  

34. D. Nitzan, A. E. Brain, and R. O. Duda, “The measurement and use of registered reflectance and range data in scene analysis,” Proc. IEEE 65(2), 206–220 (1977). [CrossRef]  

35. P. J. Besl, “Active optical range imaging sensors,” Mach. Vis. Appl. 1(2), 127–152 (1988). [CrossRef]  

36. R. Lange and P. Seitz, “Solid-state time-of-flight range camera,” IEEE J. Quantum Electron. 37(3), 390–397 (2001). [CrossRef]  

37. A. D. Payne, A. A. Dorrington, M. J. Cree, and D. A. Carnegie, “Improved measurement linearity and precision for AMCW time-of-flight range imaging cameras,” Appl. Opt. 49(23), 4392–4403 (2010). [PubMed]   [CrossRef]  

38. Y. Cui, S. Schuon, D. Chan, S. Thrun, and C. Theobalt, “3D shape scanning with a time-of-flight camera,” in 2010 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2010), pp. 1173–1180.

39. J. E. Nettleton, B. W. Schilling, D. N. Barr, and J. S. Lei, “Monoblock laser for a low-cost, eyesafe, microlaser range finder,” Appl. Opt. 39(15), 2428–2432 (2000). [PubMed]   [CrossRef]  

40. C. T. Allen, K. Shi, and R. G. Plumb, “The use of ground-penetrating radar with a cooperative target,” IEEE Geosci. Remote Sensing 36(5), 1821–1825 (1998). [CrossRef]  

41. B. Saleh, Introduction to Subsurface Imaging (Cambridge University Press, 2011), p. 38.

42. H.-J. Jordan, M. Wegner, and H. Tiziani, “Highly accurate non-contact characterization of engineering surfaces using confocal microscopy,” Meas. Sci. Technol. 9(7), 1142–1151 (1998). [CrossRef]  

43. L. Yang, G. Wang, J. Wang, and Z. Xu, “Surface profilometry with a fibre optical confocal scanning microscope,” Meas. Sci. Technol. 11(12), 1786–1791 (2000). [CrossRef]  

44. H. J. Tiziani and H.-M. Uhde, “Three-dimensional image sensing by chromatic confocal microscopy,” Appl. Opt. 33(10), 1838–1843 (1994). [PubMed]   [CrossRef]  

45. J. Cohen-Sabban, J. Gaillard-Groleas, and P. J. Crepin, “Extended-field confocal imaging for 3D surface sensing,” Proc. SPIE 5252, 366–371 (2004). [CrossRef]  

46. J. R. Garzón, J. Meneses, G. Tribillion, T. Gharbi, and A. Plata, “Chromatic confocal microscopy by means of continuum light generated through a standard single mode fiber,” J. Opt. A, Pure Appl. Opt. 6(6), 544–548 (2004). [CrossRef]  

47. T. Dabbs and M. Glass, “Fiber-optic confocal microscope: FOCON,” Appl. Opt. 31(16), 3030–3035 (1992). [PubMed]   [CrossRef]  

48. R. Juškaitis and T. Wilson, “Imaging in reciprocal fibre-optic based confocal scanning microscopes,” Opt. Commun. 92(4–6), 315–325 (1992). [CrossRef]  

49. E. Shafir and G. Berkovic, “Expanding the realm of fiber optic confocal sensing for probing position, displacement, and velocity,” Appl. Opt. 45(30), 7772–7777 (2006). [PubMed]   [CrossRef]  

50. E. Shafir and G. Berkovic, “Multi-wavelength fiber optic displacement sensing,” Proc. SPIE 5952, 59520X (2005). [CrossRef]  

51. K. Shi, S. H. Nam, P. Li, S. Yin, and Z. Liu, “Wavelength division multiplexed confocal microscopy using supercontinuum,” Opt. Commun. 263(2), 156–162 (2006). [CrossRef]  

52. G. Berkovic, E. Shafir, M. A. Golub, M. Bril, and V. Shurman, “Multiple-fiber and multiple-wavelength confocal sensing with diffractive optical elements,” IEEE Sensors 8(7), 1089–1092 (2008). [CrossRef]  

53. B. E. A. Saleh and M. C. Teich, Fundamentals of Photonics (Wiley, 1991).

54. A. Koch and R. Ulrich, “Fiber-optic displacement sensor with 0.02 µm resolution by white-light interferometry,” Sens. Actuators A Phys. 25(1-3), 201–207 (1990). [CrossRef]  

55. H.-T. Shang, “Chromatic dispersion measurement by white-light interferometry on metre-length single-mode optical fibre,” Electron. Lett. 17(17), 603–605 (1981). [CrossRef]  

56. Y.-J. Rao and D. A. Jackson, “Recent progress in fibre optic low-coherence interferometry,” Meas. Sci. Technol. 7(7), 981–999 (1996). [CrossRef]  

57. A. F. Fercher, “Optical coherence tomography,” J. Biomed. Opt. 1(2), 157–173 (1996). [CrossRef]  

58. J. M. Schmitt, “Optical coherence tomography (OCT): a review,” IEEE J. Sel. Top. Quantum Electron. 5(4), 1205–1215 (1999). [CrossRef]  

59. W. Drexler, “Ultrahigh-resolution optical coherence tomography,” J. Biomed. Opt. 9(1), 47–74 (2004). [PubMed]   [CrossRef]  

60. C. Pitris, M. E. Brezinski, B. E. Bouma, G. J. Tearney, J. F. Southern, and J. G. Fujimoto, “High resolution imaging of the upper respiratory tract with optical coherence tomography: a feasibility study,” Am. J. Respir. Crit. Care Med. 157(5 Pt 1), 1640–1644 (1998). [PubMed]  

61. W. J. Walecki, A. Pravdivtsev, M. Santos II, and A. Koo, “High-speed high-accuracy fiber optic low-coherence interferometry for in situ grinding and etching process monitoring,” Proc. SPIE 6293, 62930D (2006). [CrossRef]  

62. M. L. Dufour, G. Lamouche, S. Vergnole, B. Gauthier, C. Padioleau, M. Hewko, S. Lévesque, and V. Bartulovic, “Surface inspection of hard to reach industrial parts using low coherence interferometry,” Proc. SPIE 6343, 63431Z (2006). [CrossRef]  

63. B. L. Danielson and C. Y. Boisrobert, “Absolute optical ranging using low coherence interferometry,” Appl. Opt. 30(21), 2975–2979 (1991). [PubMed]   [CrossRef]  

64. E. Shafir, M. Shtilman, E. Naor, and G. Berkovic, “Thermally independent fibre optic absolute distance measurement system based on white light interferometry,” IET Optoelectron. 5(2), 68–71 (2011). [CrossRef]  

65. D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, C. A. Puliafito, and J. G. Fujimoto, “Optical coherence tomography,” Science 254(5035), 1178–1181 (1991). [PubMed]   [CrossRef]  

66. M. A. Choma, K. Hsu, and J. A. Izatt, “Swept source optical coherence tomography using an all-fiber 1300 nm ring laser source,” J. Biomed. Opt. 10(4), 044009 (2005). [CrossRef]  

67. T. Klein, W. Wieser, C. M. Eigenwillig, B. R. Biedermann, and R. Huber, “Megahertz OCT for ultrawide-field retinal imaging with a 1050 nm Fourier domain mode-locked laser,” Opt. Express 19(4), 3044–3062 (2011). [PubMed]   [CrossRef]  

68. A. F. Fercher, C. K. Hitzenberger, G. Kamp, and S. Y. Elzaiat, “Measurement of intraocular distances by backscattering spectral interferometry,” Opt. Commun. 117(1–2), 43–48 (1995). [CrossRef]  

69. C. K. Hitzenberger, P. Trost, P. W. Lo, and Q. Y. Zhou, “Three-dimensional imaging of the human retina by high-speed optical coherence tomography,” Opt. Express 11(21), 2753–2761 (2003). [PubMed]   [CrossRef]  

70. P. Patwari, N. J. Weissman, S. A. Boppart, C. Jesser, D. Stamper, J. G. Fujimoto, and M. E. Brezinski, “Assessment of coronary plaque with optical coherence tomography and high-frequency ultrasound,” Am. J. Cardiol. 85(5), 641–644 (2000). [PubMed]   [CrossRef]  

71. G. J. Tearney, S. A. Boppart, B. E. Bouma, M. E. Brezinski, N. J. Weissman, J. F. Southern, and J. G. Fujimoto, “Scanning single-mode fiber optic catheter-endoscope for optical coherence tomography,” Opt. Lett. 21(7), 543–545 (1996). [PubMed]   [CrossRef]  

72. J. G. Fujimoto, C. Pitris, S. A. Boppart, and M. E. Brezinski, “Optical coherence tomography: an emerging technology for biomedical imaging and optical biopsy,” Neoplasia 2(1/2), 9–25 (2000). [PubMed]   [CrossRef]  

73. D. Stifter, “Beyond biomedicine: a review of alternative applications and developments for optical coherence tomography,” Appl. Phys. B 88(3), 337–357 (2007). [CrossRef]  

74. G. Sirat and D. Psaltis, “Conoscopic holography,” Opt. Lett. 10(1), 4–6 (1985). [PubMed]   [CrossRef]  

75. Y. Malet and G. Y. Sirat, “Conoscopic holography application: multipurpose rangefinders,” J. Opt. 29(3), 183–187 (1998). [CrossRef]  

76. W. Hortschitz, H. Steiner, M. Sachse, M. Stifter, F. Kohl, J. Schalko, A. Jachimowicz, F. Keplinger, and T. Sauter, “An optical in-plane MEMS vibration sensor,” IEEE Sens. J. 11(11), 2805–2812 (2011). [CrossRef]  

77. E. Shafir, G. Berkovic, Y. Horovitz, G. Appelbaum, E. Moshe, E. Horovitz, A. Skutelski, M. Werdiger, L. Perelmutter, and M. Sudai, “Noncontact ballistic motion measurement using a fiber-optic confocal sensor,” J. Appl. Phys. 101(9), 093107 (2007). [CrossRef]  

78. Y. Yeh and H. Z. Cummins, “Localized fluid flow measurements with an He–Ne laser spectrometer,” Appl. Phys. Lett. 4(10), 176–178 (1964). [CrossRef]  

79. J. W. Foreman, E. W. George, and R. D. Lewis, “Measurement of localized flow velocities in gases with a laser Doppler flowmeter,” Appl. Phys. Lett. 7(4), 77–78 (1965). [CrossRef]  

80. R. K. Raney, “Synthetic aperture imaging radar and moving targets,” IEEE Trans. Aerosp. Electron. Syst. AES-7(3), 499–505 (1971). [CrossRef]  

81. V. Gusmeroli and M. Martinelli, “Distributed laser Doppler velocimeter,” Opt. Lett. 16(17), 1358–1360 (1991). [PubMed]   [CrossRef]  

82. A. P. Shepherd and G. L. Riedel, “Continuous measurement of intestinal mucosal blood flow by laser-Doppler velocimetry,” Am. J. Physiol. 242(6), G668–G672 (1982). [PubMed]  

83. K. A. Browning and R. Wexler, “The determination of kinematic properties of a wind field using Doppler radar,” J. Appl. Meteorol. 7(1), 105–113 (1968). [CrossRef]  

84. J. W. Bilbro, “Atmospheric laser Doppler velocimetry—An overview,” Opt. Eng. 19, 533–542 (1980).

85. M. Harris, G. Constant, and C. Ward, “Continuous-wave bistatic laser Doppler wind sensor,” Appl. Opt. 40(9), 1501–1506 (2001). [PubMed]   [CrossRef]  

86. O. T. Strand, D. R. Goosman, C. Martinez, T. L. Whitworth, and W. W. Kuhlow, “Compact system for high-speed velocimetry using heterodyne techniques,” Rev. Sci. Instrum. 77(8), 083108 (2006). [CrossRef]  

87. L. M. Barker and R. E. Hollenbach, “Laser interferometer for measuring high velocities of any reflecting surface,” J. Appl. Phys. 43(11), 4669–4675 (1972). [CrossRef]  

88. W. F. Hemsing, “Velocity sensing interferometer (VISAR) modification,” Rev. Sci. Instrum. 50(1), 73–78 (1979). [PubMed]   [CrossRef]  

89. P. Castellini, M. Martarelli, and E. P. Tomasini, “Laser Doppler vibrometry: development of advanced solutions answering to technology’s needs,” Mech. Syst. Signal Process. 20(6), 1265–1285 (2006). [CrossRef]  

90. R. Bogue, “Three-dimensional measurements: a review of technologies and applications,” Sensor Rev. 30(2), 102–106 (2010). [CrossRef]  

91. C. Cristalli, N. Paone, and R. M. Rodríguez, “Mechanical fault detection of electric motors by laser vibrometer and accelerometer measurements,” Mech. Syst. Signal Process. 20(6), 1350–1361 (2006). [CrossRef]  

92. C. Polhemus, “Two-wavelength interferometry,” Appl. Opt. 12(9), 2071–2074 (1973). [PubMed]   [CrossRef]  

93. K. Alzahrani, D. Burton, F. Lilley, M. Gdeisat, F. Bezombes, and M. Qudeisat, “Absolute distance measurement with micrometer accuracy using a Michelson interferometer and the iterative synthetic wavelength principle,” Opt. Express 20(5), 5658–5682 (2012). [PubMed]   [CrossRef]  

94. D. Xiaoli and S. Katuo, “High-accuracy absolute distance measurement by means of wavelength scanning heterodyne interferometry,” Meas. Sci. Technol. 9(7), 1031–1035 (1998). [CrossRef]  

95. P. A. Coe, D. F. Howell, and R. B. Nickerson, “Frequency scanning interferometry in ATLAS: remote, multiple, simultaneous and precise distance measurements in a hostile environment,” Meas. Sci. Technol. 15(11), 2175–2187 (2004). [CrossRef]  

96. J. A. Stone, A. Stejskal, and L. Howard, “Absolute interferometry with a 670-nm external cavity diode laser,” Appl. Opt. 38(28), 5981–5994 (1999). [PubMed]   [CrossRef]  

97. F. Pollinger, K. Meiners-Hagen, M. Wedde, and A. Abou-Zeid, “Diode-laser-based high-precision absolute distance interferometer of 20 m range,” Appl. Opt. 48(32), 6188–6194 (2009). [PubMed]   [CrossRef]  

98. G. Beheim and K. Fritsch, “Remote displacement measurements using a laser diode,” Electron. Lett. 21(3), 93–94 (1985). [CrossRef]  

99. K. Määtta, J. Kostamovaara, and R. Myllylä, “Profiling of hot surfaces by pulsed time-of-flight laser range finder techniques,” Appl. Opt. 32(27), 5334–5347 (1993). [PubMed]   [CrossRef]  

100. N. Satyan, A. Vasilyev, G. Rakuljic, V. Leyva, and A. Yariv, “Precise control of broadband frequency chirps using optoelectronic feedback,” Opt. Express 17(18), 15991–15999 (2009). [PubMed]   [CrossRef]  

101. E. Shafir and G. Berkovic, “Compact fibre optic probe for simultaneous distance and velocity determination,” Meas. Sci. Technol. 12, 943–947 (2001).

102. H.-J. Yang, J. Deibel, S. Nyberg, and K. Riles, “High-precision absolute distance and vibration measurement with frequency scanned interferometry,” Appl. Opt. 44(19), 3937–3944 (2005). [PubMed]   [CrossRef]  

103. S. Donati, G. Giuliani, and S. Merlo, “Laser diode feedback interferometer for measurement of displacements without ambiguity,” IEEE J. Quantum Electron. 31(1), 113–119 (1995). [CrossRef]  

104. F. Gouaux, N. Servagent, and T. Bosch, “Absolute distance measurement with an optical feedback interferometer,” Appl. Opt. 37(28), 6684–6689 (1998). [PubMed]   [CrossRef]  

105. G. Giuliani, M. Norgia, S. Donati, and T. Bosch, “Laser diode self-mixing technique for sensing applications,” J. Opt. A, Pure Appl. Opt. 4(6), S283–S294 (2002). [CrossRef]  

106. S. Donati, “Developing self-mixing interferometry for instrumentation and measurements,” Laser Photonics Rev. 6(3), 393–417 (2012). [CrossRef]  

107. L. Scalise, Y. Yu, G. Giuliani, G. Plantier, and T. Bosch, “Self-mixing laser diode velocimetry: application to vibration and velocity measurement,” IEEE Trans. Instrum. Meas. 53(1), 223–232 (2004). [CrossRef]  

108. F. P. Mezzapesa, L. Columbo, M. Brambilla, M. Dabbicco, A. Ancona, T. Sibillano, F. De Lucia, P. M. Lugarà, and G. Scamarcio, “Simultaneous measurement of multiple target displacements by self-mixing interferometry in a single laser diode,” Opt. Express 19(17), 16160–16173 (2011). [PubMed]   [CrossRef]  

109. D. Guo and M. Wang, “Self-mixing interferometry based on a double-modulation technique for absolute distance measurement,” Appl. Opt. 46(9), 1486–1491 (2007). [PubMed]   [CrossRef]  

110. M. Norgia, G. Giuliani, and S. Donati, “Absolute distance measurement with improved accuracy using laser diode self-mixing interferometry in a closed loop,” IEEE Trans. Instrum. Meas. 56(5), 1894–1900 (2007). [CrossRef]  

111. W. W. Morey, G. Meltz, and W. H. Glenn, “Fiber optic Bragg grating sensors,” Proc. SPIE 1169, 98–107 (1989).

112. A. Othonos, “Fiber Bragg gratings,” Rev. Sci. Instrum. 68(12), 4309–4341 (1997). [CrossRef]  

113. S. Zhang, S. B. Lee, X. Fang, and S. S. Choi, “In-fiber grating sensors,” Opt. Lasers Eng. 32(5), 405–418 (1999). [CrossRef]  

114. L. Ren, G. Song, M. Conditt, P. C. Noble, and H. Li, “Fiber Bragg grating displacement sensor for movement measurement of tendons and ligaments,” Appl. Opt. 46(28), 6867–6871 (2007). [PubMed]   [CrossRef]  

115. T. Thiel, J. Meissner, and U. Kliebold, “Autonomous crack response monitoring on civil structures with fiber Bragg grating displacement sensors,” Proc. SPIE 5855, 1068–1071 (2005). [CrossRef]  

116. S. Rapp, L.-H. Kang, J.-H. Han, U. C. Mueller, and H. Baier, “Displacement field estimation for a two-dimensional structure using fiber Bragg grating sensors,” Smart Mater. Struct. 18(2), 025006 (2009). [CrossRef]  

117. X. Dong, X. Yang, C.-L. Zhao, L. Ding, P. Shum, and N. Q. Ngo, “A novel temperature insensitive fiber Bragg grating sensor for displacement measurement,” Smart Mater. Struct. 14, N7–N10 (2005). [CrossRef]  

118. J. H. Ng, X. Zhou, X. Yang, and J. Hao, “A simple temperature-insensitive fiber Bragg grating displacement sensor,” Opt. Commun. 273(2), 398–401 (2007). [CrossRef]  

119. P. Wang, Y. Semenova, Q. Wu, and G. Farrell, “A bend loss-based singlemode fiber microdisplacement sensor,” Microw. Opt. Technol. Lett. 52(10), 2231–2235 (2010). [CrossRef]  

120. P. Wang, G. Brambilla, Y. Semenova, Q. Wu, and G. Farrell, “A simple ultrasensitive displacement sensor based on a high bend loss single-mode fibre and a ratiometric measurement system,” J. Opt. 13(7), 075402 (2011). [CrossRef]  

121. Q. Wu, A. M. Hatta, P. Wang, Y. Semenova, and G. Farrell, “Use of a bent single SMS fiber structure for simultaneous measurement of displacement and temperature sensing,” IEEE Photon. Technol. Lett. 23(2), 130–132 (2011). [CrossRef]  

122. D. Litwin, J. Galas, S. Sitarek, B. Surma, B. Piatkowski, and A. Miros, “Temperature influence in confocal techniques for a silicon wafer testing,” Proc. SPIE 6585, 68050V (2007).

Garry Berkovic (member OSA) was born in 1955. He received the B.Sc. degree in chemistry from the University of Melbourne, Melbourne, Australia, and the Ph.D. degree from the Weizmann Institute of Science, Israel, in 1983. After postdoctoral research at the University of California, Berkeley, he was on the faculty of the Weizmann Institute of Science. Since 1998, he has been at the Soreq NRC, Yavne, Israel. He has published more than 100 papers, mainly in the fields of nonlinear optics, spectroscopy, advanced materials, and optical-based sensors.

Ehud Shafir (member OSA) was born in 1955. He received the B.Sc. degree from the Hebrew University, Jerusalem, Israel, in 1982, the M.Sc. degree from the Weizmann Institute of Science, Rehovot, Israel, in 1985, and the Ph.D. degree from Tel-Aviv University, Tel-Aviv, Israel, in 1991. Since 1985, he has been with the Soreq NRC, Yavne, Israel. He has spent periods as a Visiting Research Scientist at the Norwegian Institute of Technology, Trondheim, Norway, the Optoelectronics Research Centre, Southampton, U.K., and El-Op, Electro-Optic Industries, Rehovot, Israel. His main interests lie in the fields of fiber optic sensors, smart structures, and structural health monitoring.



Figures (16)

Figure 1
Figure 1 Illustration of the relationship between an absolute distance measurement and the derived quantities of displacement, surface profile, speed, and vibration.
Figure 2
Figure 2 Illustration of a fiber optic intensity sensor for distance measurements. (a) Two-fiber sensor, showing the object and the operational principle. (b) Various geometries for fiber bundle sensor heads, copied with permission from [6].
Figure 3
Figure 3 Typical signal versus distance response for an intensity-based fiber optic displacement sensor (copied with permission from [14]).
Figure 4
Figure 4 Demonstration of the possible sensitivity of an intensity-based fiber optic displacement sensor to the tilt angle of specular or shiny objects (e.g., a mirror). The lower illustration shows why a mirror at distance d will specularly reflect more light into the receiving fiber at a tilt angle θ satisfying tan θ = a/d than at normal incidence, where a is the separation of the fiber centers.
Figure 5
Figure 5 Normalized measured responses of three different fiber sensors at a constant distance of 1.5 mm from a machined aluminum target (with <3 µm surface roughness), while the target is translated laterally. The nominal distance to the target remains constant, but different areas of the target are exposed to the input light.
Figure 6
Figure 6 Surface roughness model, showing light rays reflected specularly according to the local surface normal. The receiving fiber(s) collect different amounts of light from different sections of the surface.
Figure 7
Figure 7 The effect of lateral translation of various target objects relative to the sensor with a single-mode transmitting fiber and a 100 µm core receiving fiber. Results are shown for three target objects: a mirror, a standard white scattering surface (Labsphere USRS-99-010), and the machined metal surface of Fig. 5.
Figure 8
Figure 8 Principle of an optical triangulation sensor. The unknown distance, D, is determined from the known distances E and F and the measured value of G, the distance to the pixel on the position-sensitive detector (PSD) that records the image of the laser spot on the measured object.
Figure 9
Figure 9 (a) Triangulation based on measurement of the angles of view θP and θQ from two known observation points, P and Q. The object is located at the intersection of the two lines drawn. (b) Triangulation based on measurement of the distances ZP and ZQ from two known observation points, P and Q. The object is located at the intersection of the two arcs drawn.
Figure 10
Figure 10 (a) Pulsed time-of-flight measurement showing three regimes where the time of flight is (i) much longer than, (ii) similar to, and (iii) shorter than the pulse width. For nanosecond pulses these cases correspond to distances (in air) of >50 m, a few meters, and <1 m, respectively. (b) Intensity-modulated time of flight is an alternative method suitable for distances in range (ii). The phase shift is indicated by the double-headed arrow.
Figure 11
Figure 11 Principle of fiber optic monochromatic and polychromatic confocal sensing. (i) Monochromatic confocal sensor with an object at the image plane. (ii) Monochromatic confocal sensor with an object displaced from the image plane (adapted from [49]). (iii) Polychromatic sensor where the image plane position varies with wavelength and a different wavelength will satisfy the confocal geometry for each object position.
Figure 12
Figure 12 Basic setup for WLI and a typical interference pattern as a function of the reference arm mirror movement. A low-coherence source is used, in this case a fiber-coupled 1.5 µm superluminescent diode with 60 nm spectral width. The interferogram shows an oscillation period of half the wavelength and a width corresponding to the coherence length of the source.
Figure 13
Figure 13 Measurement of a vibrating object by chromatic confocal sensor.
Figure 14
Figure 14 Demonstration of fiber-optic-assisted Doppler velocimetry. (a) The experimental setup, using a pigtailed coherent 1.5 µm laser source and a moving mirror. (b) Detector output measured by an oscilloscope for a motion of 100 µm/s. (c) Detector output measured by an electronic spectrum analyzer for a motion of 100 µm/s (black), 50 µm/s (blue), and 25 µm/s (red).
Figure 15
Figure 15 Schematic diagram showing how an FBG can be used to monitor displacement of the object. The red dots denote the points where the fiber is glued to the object and to a fixed support.
Figure 16
Figure 16 Comparison of typical resolutions and working distances of the optical distance measurement techniques discussed in this article. The entry for triangulation is divided into a filled region, corresponding to measurement from a single observation point, and a shaded region for multiple observation points.
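The single-observation-point triangulation of Fig. 8 reduces to similar triangles. The following is a minimal numerical sketch, assuming the simplified relation D = EF/G with symbols as in the caption of Fig. 8; the exact expression depends on the particular sensor geometry, and the numbers below are purely illustrative:

```python
def triangulation_distance(E, F, G):
    """Similar triangles give D / E = F / G, so D = E * F / G.

    E, F are the sensor's known internal distances; G is the measured
    image-spot position on the PSD. All values in consistent units.
    """
    return E * F / G

# Hypothetical sensor: E = 50 mm, F = 20 mm, spot imaged at G = 2 mm
# gives an object distance D = 500 mm.
D = triangulation_distance(50.0, 20.0, 2.0)
```

Since dD/dG = -EF/G² = -D²/(EF), a fixed uncertainty in the PSD spot position translates into a distance error that grows quadratically with D, which is why triangulation resolution degrades at long working distances (cf. Fig. 16).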

Equations (5)

Equations on this page are rendered with MathJax.

$$l_c = \frac{\lambda^2}{2\,\Delta\lambda},$$
$$\frac{\Delta f}{f} = \frac{2v}{c} \qquad \text{or} \qquad \lambda\,\Delta f = 2v.$$
$$L = \left(N + \phi/2\pi\right)\lambda.$$
$$n(\lambda_0, T_0) = n(\lambda_0 + \Delta\lambda,\; T_0 + \Delta T),$$
$$\Delta\lambda\,\frac{\partial n}{\partial\lambda} = -\Delta T\,\frac{\partial n}{\partial T}.$$
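As a quick numerical check of the coherence-length and Doppler-shift relations above, the sketch below plugs in the source and motion parameters quoted in the captions of Figs. 12 and 14 (a minimal illustration; the functions and variable names are ours, not from the article):

```python
def coherence_length(wavelength, bandwidth):
    """Coherence length l_c = lambda^2 / (2 * delta_lambda), lengths in meters."""
    return wavelength**2 / (2 * bandwidth)

def doppler_shift(velocity, wavelength):
    """Doppler beat frequency Delta_f = 2 v / lambda (in Hz)."""
    return 2 * velocity / wavelength

# SLD of Fig. 12: 1.5 um center wavelength, 60 nm spectral width
lc = coherence_length(1.5e-6, 60e-9)   # -> 1.875e-5 m, i.e. ~18.8 um

# Mirror of Fig. 14 moving at 100 um/s, probed with a 1.5 um laser
df = doppler_shift(100e-6, 1.5e-6)     # -> ~133 Hz beat frequency
```

These values are consistent with the interferogram width in Fig. 12 and the spectrum-analyzer peaks of Fig. 14(c), where halving the velocity halves the beat frequency.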