
Multiple-wavelength range-gated active imaging applied to the evaluation of simultaneous movement of millimeter-size objects moving in a given volume


Abstract

The combined multiple-wavelength range-gated active imaging (WRAI) principle is able to determine the position of a moving object in a four-dimensional space and to deduce its trajectory and its speed independently of the video frequency. However, when the scene size is reduced and the objects are millimeter sized, the temporal parameters governing the depth of the visualized zone in the scene cannot be reduced further because of technological limitations. To improve the depth resolution, the illumination type of the juxtaposed style of this principle has been modified. It was therefore important to evaluate this new context in the case of millimeter-size objects moving simultaneously in a reduced volume. Based on the rainbow volume velocimetry method, the combined WRAI principle was studied in accelerometry and velocimetry with four-dimensional images of millimeter-size objects. This basic principle, combining two wavelength categories, determines the depth of moving objects in the scene with the warm color category and the precise moment of each object position with the cold color category. The difference in this new, to the best of our knowledge, method lies in the scene illumination, which is obtained transversally with a pulsed light source having a wide spectral band limited to warm colors in order to obtain a better depth resolution. For cold colors, the illumination with pulsed beams of distinct wavelengths remains unchanged. Thus, it is possible, independently of the video frequency, to know from a single recorded image the trajectory, the speed, and the acceleration of millimeter-size objects moving simultaneously in 3D space, as well as the chronology of their passages. The experimental tests validated this modified multiple-wavelength range-gated active imaging method and confirmed the possibility of avoiding confusion when the object trajectories intersect.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. INTRODUCTION

Multiple-wavelength range-gated active imaging (WRAI) is a visualization technique using an imaging sensor array and its own illumination source (laser). In contrast with other methods such as laser scanning, active imaging directly displays a two-dimensional image of the scene. In practice, active imaging is used to scan a scene in depth and to restore this scene in 3D [1]. Every recorded space slice is selected according to the aperture delay of the camera and is visualized according to the light pulse width and the aperture time of the camera [2]. Furthermore, as this technique is range gated, it is possible to observe an object behind scattering environments [3]. Recently, instead of using a single wavelength and being dependent on the video frequency by recording one visualized slice per image, it became possible with the WRAI principle to restore the 3D scene directly in a single image at the moment of recording with a video camera [4–6]. Thus, each emitted light pulse having a different wavelength corresponds to a visualized slice at a different distance in the scene. This operating mode corresponds to the juxtaposed style of the WRAI principle, where the wavelengths are juxtaposed one behind the other, thereby restoring the 3D of the scene. The second style of the principle is the superimposed style [7]. In this case, the wavelengths correspond to precise moments. Each moment is directly recorded in a single image in which each wavelength freezes in time a position of the moving object, like a stroboscopic recording system [8], but with the difference that the chronological positions can be identified directly thanks to the chronological order of the wavelengths. Recently, by combining these two styles, it has been shown that it is possible to know the position of a moving object in a four-dimensional space represented by a single image and to deduce its trajectory and its speed [9]. Because, in the juxtaposed style of the WRAI principle, the depth of the visualized zones in the scene depends on the width of the laser pulse and the aperture time of the camera, its resolution was examined in relation to the technological limit of these minimum time values. At this limit, the scene scale was millimetric. To deduce the different three-dimensional parameters related to the movement of objects of this size, the direction of scene illumination in the juxtaposed style had to be modified. To achieve this modification, it was important to first situate the three-dimensional measurement in velocimetry within a more general context. Such a measurement can be performed with methods deducing the third dimension [10], such as stereoscopy [11–14] and sequential tomography [15], or with methods giving it directly, such as holography [16–18] and multiple tomography [19–21]. While the different types of illumination often remained monochrome [22], introducing color in the latter methods gave an interesting advantage, as in color-coded particle image velocimetry [23] and particle tracking velocimetry [24]. Thus, from the color observed on the object, its position in depth could be known directly [20]. Other methods also using color, such as shadowgraphy [25], tend to be used for transparent objects [26,27]. By increasing the number of colors, it was possible to measure a deeper volume of particles or to improve the depth resolution.
It was for this purpose that the rainbow volume velocimetry (RVV) method was proposed [28], decomposing the different colors of white light (rainbow) to obtain a maximum color variety [29]. However, while the depth could be identified directly in the scene, the trajectory could only be deduced from the video images [30,31]. The instantaneous behavior was not observable, and the direction of the trajectories was not automatically known, especially when the object trajectories were random and a priori unknown, unless the two frames of the image were combined; but in that case, the phenomenon became dependent on the video frequency [32]. The dependence on the video frequency was also found in accelerometry [33] despite the use of color [34]. In addition, to measure the acceleration, the object trajectory had to remain in a thin slice perpendicular to the line of sight of the camera or in the mean centerline axial acceleration profile [35], which did not allow a measurement in a volume. Based on the combined WRAI principle, the three-dimensional measurement in velocimetry and in accelerometry of objects was carried out from four-dimensional imaging. Contrary to [9], which used a limited number of light pulses in the juxtaposed style, the illumination from a pulsed white light source increased this number after its dispersion through a linear variable bandpass filter. Thus, it was possible to identify objects in a deeper volume with a better resolution. On the other hand, in this case, the range gating for this illumination must be perpendicular to the line of sight of the scene, similar to the RVV method. Therefore, we propose to study the possibility of making a three-dimensional measurement in velocimetry and in accelerometry of millimeter-size objects moving simultaneously in a four-dimensional space, with the chronology of their positions, from a single image independently of the video frequency [36].

In Section 2, we present the principle of four-dimensional imaging based on the combined WRAI principle applied to the simultaneous movement of millimeter-size objects. In Section 3, the experimental results of the multi-wavelength imaging applied to velocimetry and accelerometry are presented and evaluated, after a prior calibration of the optical setup.

2. PRINCIPLE

As each style of the WRAI principle uses multiple wavelengths, it is important that they do not overlap one another in order to extract good results. To differentiate the wavelengths of each style in the combined WRAI method, they were classified into two color categories [37]. The warm colors ($H$) correspond to the depth with the juxtaposed style and the cold colors ($C$) to the time with the superimposed style. The use of $H$ and $C$ color wavelengths requires light sources with different wavelengths. Concerning the synchronization, a combined cycle corresponds to the emission of the series of warm color laser pulses together with a cold color light pulse, followed by a camera aperture. Each cycle has a different cold color light pulse, and the number of cycles corresponds to the number of cold color light pulses. Concerning the depth, its resolution depends on the width of the laser pulse of each $H$ wavelength and on the aperture of the camera shutter. So, to increase the resolution, it is necessary to reduce the width of the laser pulse and the shutter aperture time. In addition, the energy of the light sources must also be increased to have enough illuminance in the scene. Therefore, for wavelengths corresponding to depth slices very close to each other, it is practically impossible to obtain a sufficiently short shutter aperture time. For the camera type used for the WRAI principle, the minimum aperture time is 200 ps, which gives a minimum depth of 6 cm per wavelength, also corresponding to the depth resolution. While the current juxtaposed style is well suited to large objects, it requires a modification to view millimeter-size objects. In our case, since the limits of the scene are known, the warm color illumination can be emitted transversally, provided that the space occupied by each wavelength is perpendicular to the field of vision of the camera. Thus, the shutter aperture time is no longer dependent on the light pulse width of the warm colors. It is at this level that the warm color category actually differs from the classical WRAI principle and comes close to the RVV method. Indeed, the depth is no longer defined in relation to the pulse widths but in relation to the wavelength position within the vertical illumination. On the other hand, this illumination remains pulsed to avoid luminous trails of the moving object in the image. In order to obtain this quasi-continuous space of wavelengths in the scene, a white light beam is used and sent through a linear variable bandpass filter to disperse the output wavelengths parallel to each other. Therefore, each wavelength defines a different depth of the scene, which gives the following expression for the irradiance arriving at the camera as a function of the distance $d$, assuming that the object reflection is Lambertian:

$$g_{\text{att}\_H}(d)=\sum_{u=1}^{N_{\lambda H}}\left\{A_{\lambda H_u}\cdot\exp\!\left(-d\cdot\alpha_{\lambda H_u}\right)\cdot k_{\text{optic}/\lambda H_u}\cdot\left(\frac{R_{\lambda H_u}}{\pi}\right)\cdot\delta\!\left(d-\left[d_{ro}\cdot(u-1)+dr+\frac{\Delta_d}{2}\right]\right)\right\},$$
where $\lambda {H_u}$ represents the warm color wavelength for the laser pulse “ $u$,” ${{{A}}_{\lambda {H_u}}}$ the amplitude of each pulse “$u$” for a given wavelength $\lambda {H_u}$, ${N_{\lambda H}}$ the number of used warm color wavelengths, ${d_{\textit{ro}}}$ the wavelength period in distance, $dr$ the distance between the camera and the beginning of the visualized global zone, ${\Delta _d}$ the depth of a visualized zone by a single light pulse, ${k_{{\rm optic}/\lambda {H_u}}}$ the attenuation coefficient of the reception lens, ${\alpha _{\lambda {H_u}}}$ the atmospheric attenuation coefficient, and ${{{R}}_{\lambda {H_u}}}$ the reflection coefficient. These last three coefficients also depend on wavelengths ($\lambda {H_u}$ for each laser pulse “$u$”). The previous equation [Eq. (1)] can be written differently by using the Fourier transform of a shifted Dirac delta distribution:
$$g_{\text{att}\_H}(d)=\sum_{u=1}^{N_{\lambda H}}\left\{A_{\lambda H_u}\cdot\exp\!\left(-d\cdot\alpha_{\lambda H_u}\right)\cdot k_{\text{optic}/\lambda H_u}\cdot\left(\frac{R_{\lambda H_u}}{\pi}\right)\cdot\mathrm{FT}^{-1}\!\left[\exp\!\left(-2\pi j\nu\left[d_{ro}\cdot(u-1)+dr+\frac{\Delta_d}{2}\right]\right)\right]\right\}.$$
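As an illustration only, the discrete warm-color model of Eq. (1) can be evaluated numerically at the slice centers defined by the Dirac comb. The short Python sketch below does this under assumed values for the amplitudes, attenuation, and reflection coefficients (none of which are taken from the experiment); it is not the authors' processing code.

```python
import numpy as np

# Numerical sketch of Eq. (1): each warm color pulse u contributes an
# attenuated, Lambertian-reflected irradiance localized (Dirac delta) at the
# center of its depth slice d = d_ro*(u-1) + dr + Delta_d/2.
# All numerical values are illustrative assumptions, not experimental data.

N_lambda_H = 10            # number of warm color wavelengths (assumed)
d_ro = 5e-3                # wavelength period in distance [m] (assumed)
dr = 0.50                  # camera-to-start-of-zone distance [m] (assumed)
Delta_d = 5e-3             # depth covered by one slice [m] (assumed)

A = np.full(N_lambda_H, 1.0)        # pulse amplitudes (assumed)
alpha = np.full(N_lambda_H, 1e-4)   # atmospheric attenuation [1/m] (assumed)
k_optic = np.full(N_lambda_H, 0.9)  # reception lens attenuation (assumed)
R = np.full(N_lambda_H, 0.5)        # Lambertian reflection coefficients (assumed)

def g_att_H(u):
    """Contribution of warm pulse u (1-based) at its slice center."""
    d_center = d_ro * (u - 1) + dr + Delta_d / 2.0
    value = A[u - 1] * np.exp(-d_center * alpha[u - 1]) * k_optic[u - 1] * R[u - 1] / np.pi
    return d_center, value

for u in range(1, N_lambda_H + 1):
    d_c, g = g_att_H(u)
    print(f"pulse u={u:2d}: slice center = {d_c * 1e3:6.1f} mm, relative irradiance = {g:.4f}")
```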

On the other hand, the superimposed style of the WRAI principle remains unchanged for the cold color wavelengths and the determination of the different moments. The expression of its irradiance as a function of distance [9] remains identical:

$$\begin{split}g_{\text{att}\_C}(d)&=\sum_{w=1}^{N_{\lambda C}}\left\{A_{\lambda C_w}\cdot\frac{\exp\!\left(-2d\cdot\alpha_{\lambda C_w}\right)}{d^{2}}\cdot k_{\text{optic}/\lambda C_w}\cdot R_{\lambda C_w}\cdot\left(\frac{\Delta_d^{2}}{c^{2}}\right)\cdot k_{\text{cam}}\right.\\&\quad\cdot\mathrm{FT}^{-1}\!\left[\operatorname{sinc}\!\left(\frac{\pi\nu\cdot\Delta_d}{2}\right)\cdot\operatorname{sinc}\!\left(\frac{\pi\nu\cdot\Delta_d\cdot k_{\text{cam}}}{2}\right)\right.\\&\quad\left.\left.\cdot\sum_{n=0}^{N_{\lambda C}-w}\exp\!\left(-2\pi j\nu\left[\Delta_d\cdot\left(\frac{1+k_{\text{cam}}}{4}\right)+dr+n\cdot\frac{c}{2\cdot F_{\text{acqu}}}\right]\right)\right]\right\},\end{split}$$
where $\lambda {C_w}$ represents the cold color wavelength for the light pulse “$w$,” ${{{A}}_{\lambda {C_w}}}$ the amplitude of each pulse “$w$” for a given wavelength $\lambda {C_w}$, ${N_{\lambda C}}$ the number of used cold color wavelengths, ${k_{{\rm cam}}}$ the multiple for the aperture time of the camera, ${F_{{\rm acqu}}}$ the acquisition frequency of the camera, ${k_{{\rm optic}/\lambda {C_w}}}$ the attenuation coefficient of the reception lens, ${\alpha _{\lambda {C_w}}}$ the atmospheric attenuation coefficient, and ${{{R}}_{\lambda {C_w}}}$ the reflection coefficient. These last three coefficients also depend on wavelengths ($\lambda {C_w}$ for each laser pulse “$w$”). The irradiance obtained from the combination of these two equations [Eqs. (2) and (3)] is given by
Fig. 1. (a) Theoretical result and (b) result obtained with the graphical method of visualizing the combined WRAI method with quasi-continuous wavelengths for the depth and different wavelengths for the time.

$$g_{\text{att}}(d) = g_{\text{att}\_H}(d) + g_{\text{att}\_C}(d).$$

This last equation allows us to model this combined multi-wavelength imaging method and to represent the scene depth as a function of time [Fig. 1(a)]. In Fig. 1(a), the visualized zones used for determining the scene depth (${Z_{D1}}\ldots{Z_{Dn}}$) appear in the same zones concerned by the different moments used (${Z_{T1}}$, ${Z_{T2}}$, ${Z_{T3}}$, and ${Z_{T4}}$). The spectral resolution has been exaggerated in Fig. 1(a) to differentiate the wavelengths on the graph. To give an idea of the principle using the different wavelengths without including the different attenuations, we employed the same graphical method as that of [38] in Fig. 1(b). The behavior of the method in Fig. 1(a) is well reflected in Fig. 1(b). Concerning the visualized zone, as each wavelength corresponds to a different space slice, the objects in this slice reflect only this wavelength. The principle of this combination is based on the fact that each wavelength used must be differentiated from the other wavelengths used for a given image. To meet this need, and on the basis of the multiplexed imaging method already used in [4], the scene image is multiplexed into four paths before the camera. By inserting adequate spectral filters, two paths are used for the warm colors and two paths for the cold colors. In each color category, the spectral curve of one path is shifted along the wavelengths with respect to the spectral curve of the second path. The purpose of this spectral shift is to identify the wavelengths of the used light pulses from the ratio of the figures produced by the two paths of the same color category. These ratios of the figures, i.e., of irradiances, eliminate the influence of reflectance and atmospheric attenuation. From the spectral response of the filters modulated by the spectral response of the camera, the ratios obtained in each category show strictly decreasing zones, which are useful for the discrimination of the different wavelengths included in these zones. Therefore, the light sources used for the combined WRAI setup must belong to these zones. Thus, according to the ratio value in the useful zone, each wavelength is identified within its color category. The important factor is not the exact value of the color level but the identification of the color, which corresponds to a specified time or to a precise position. A color level variation does not induce a variation either in time or in position; the only thing that matters is its identification among the other colors.
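To make the ratio-based identification concrete, the following minimal Python sketch assumes two hypothetical spectral responses for the two warm color paths, whose ratio is strictly decreasing over the useful zone, and inverts a measured irradiance ratio to recover the wavelength; the filter curves and gray levels are illustrative assumptions, not the real filter data.

```python
import numpy as np

# Hypothetical spectral responses of the two warm color paths over the useful
# zone (the real filter curves are not reproduced here).  Their ratio is
# strictly decreasing, so a measured irradiance ratio can be inverted into a
# wavelength; reflectance and atmospheric attenuation cancel in the ratio.
wavelengths = np.linspace(570.0, 640.0, 200)               # nm
T_warm1 = np.clip((wavelengths - 560.0) / 80.0, 0.0, 1.0)  # edge filter, path 1 (assumed)
T_warm2 = np.clip((wavelengths - 540.0) / 80.0, 0.0, 1.0)  # shifted edge filter, path 2 (assumed)
ratio_curve = T_warm2 / T_warm1                            # strictly decreasing over the band

def wavelength_from_ratio(measured_ratio):
    """Invert the monotonic ratio-vs-wavelength curve by interpolation."""
    # np.interp needs increasing abscissae, hence the reversal of the arrays.
    return np.interp(measured_ratio, ratio_curve[::-1], wavelengths[::-1])

# Example: gray levels of one object seen through the two warm paths.
I_warm1, I_warm2 = 0.42, 0.50                              # arbitrary values
lam = wavelength_from_ratio(I_warm2 / I_warm1)
print(f"identified warm wavelength ~ {lam:.1f} nm")
```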

Fig. 2. Multi-wavelength imaging setup.

3. EXPERIMENTAL RESULTS

Using this combined multi-wavelength imaging method, we determined the trajectory, the speed, and the acceleration of particles, represented by rice grains moving in a scene, from the time–space rendered by the method. For that, we used the multi-wavelength imaging setup shown in Fig. 2. This optical setup was composed of a pulsed illumination representing the warm colors, a pulsed illumination representing the cold colors, and a triggerable intensified camera (Quantum Leap E from Stanford Computer Optics) for the acquisition of images (Fig. 2). The spectral distribution of warm colors was obtained from a beam produced by a pulsed white light lamp passing through a linear variable bandpass filter with a spectral progression of 3.465 nm/mm. To eliminate the wavelengths associated with the cold colors and to keep the wavelengths related to the warm colors, a spectral canceller was placed appropriately in this spectral distribution. As a result, the scene was illuminated with a “rainbow” beam containing only warm color wavelengths. In addition, to avoid luminous trails in the image due to the movement of the rice grains, the light beams were emitted with a pulse width between 1 and 3 ms, depending on the scene. The pulsed beams of cold colors, whose wavelengths correspond to those mentioned in the previous section, were emitted alternately with a pulse width of the same order. Since the aperture time of the Quantum Leap camera was limited to a maximum of 100 µs per electrical pulse at the trigger input, a burst of electrical pulses was emitted during a time equivalent to the time width needed to illuminate the scene. To optimize this illumination with a duty cycle of 99%, the burst frequency was set at 9900 Hz. As seen in the previous section, two color categories were considered to separate the wavelengths dedicated to the depth of the scene from the wavelengths dedicated to the temporal aspect of the scene. Thus, the optical setup was built by adapting the same model used in [4], modifying some parts such as the multiplexing of the scene image into four paths and the matrix of appropriate spectral filters. Concerning the four-way multiplexing, taking inspiration from [39], we mounted a setup with two Fresnel bi-prisms (Fig. 2). A first bi-prism was used to double the input image in a direction perpendicular to the optical axis. The second bi-prism, positioned in succession, allowed us to double the two previous images in the other direction perpendicular to the optical axis. As a result, four images identical to the input image appeared at the setup output. Passing through the spectral filter matrix, each output image was filtered at the same time. The filters making up the matrix were modified from [4] to allow the use of commercial filters. Thus, the Cold 1 filter was implemented with a Schott BG25 bandpass filter, the Cold 2 filter with a Schott BG42 bandpass filter combined with an Andover 500FL07 short-wave pass filter, the Warm 1 filter with a Hoya O58 high-pass filter, and the Warm 2 filter with a Schott BG42 bandpass filter combined with an Andover 550SC01 long-wave pass filter (Fig. 2).
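As a quick consistency check of the gating parameters quoted above (100 µs maximum aperture per trigger pulse, 9900 Hz burst frequency), the short sketch below reproduces the 99% duty cycle and estimates the number of trigger pulses needed to cover a millisecond-scale illumination window; the 3 ms window is simply the upper bound of the pulse widths mentioned in the text.

```python
# Consistency check of the burst parameters given in the text.
aperture_per_pulse = 100e-6   # s, maximum camera aperture per trigger pulse
burst_frequency = 9900.0      # Hz, burst repetition rate

duty_cycle = aperture_per_pulse * burst_frequency
print(f"duty cycle = {duty_cycle:.0%}")                      # -> 99%

illumination_window = 3e-3    # s, upper bound of the light pulse widths used
pulses_needed = illumination_window * burst_frequency
print(f"trigger pulses to cover {illumination_window * 1e3:.0f} ms: about {pulses_needed:.0f}")
```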

Fig. 3. (a) Progression of warm color wavelengths as a function of the scene depth, (b) evolution of warm color ratio as a function of the scene depth, (c) evolution of scene depth as a function of the warm color ratio, (d) comparison between the estimated distance from the wavelength ratio and the real distance, (e) error curve of estimated distances.

The identification of the wavelengths used in each color category was obtained in the same manner as in the WRAI principle [9]. From the ratio of the two frames in each color category, which shows decreasing zones, the different wavelengths included in these useful zones were discerned. As a result, the light sources used and the spectral limits of the rainbow beam in the setup (Fig. 2) were selected to be included in these zones. Thus, according to the ratio value in the useful zone, each wavelength is identified within its color category. In the cold zone, the obtained ratio designates the cold color wavelength, which in turn designates the time value. In the warm zone, the obtained ratio designates the warm color wavelength, which in turn designates the depth value. Finally, since each ratio gives a time or a distance depending on the zone, the time and the distance were determined directly from the ratio.

A. Static Control

Before using the optical setup fully, it was important to perform several checks and to calibrate it. The first test consisted of checking whether the warm color wavelengths progressed linearly as a function of the scene depth. This was confirmed by placing the optical fiber input of a spectral analyzer on a translation stage [Fig. 3(a)]. To identify the distances according to the wavelengths obtained from the ratio of the two warm color frames, a rice grain was placed on a translation stage to control its radial movement in the illuminated zone. After each rice grain progression of 0.25 mm, an image was recorded to give the ratio of the two warm color frames at this distance. Thus, at the end of the travel, the evolution of the ratio as a function of the distance was recorded [Fig. 3(b)]. By inverting this curve, the evolution of the distance could be shown as a function of the ratio [Fig. 3(c)]. From there, by interpolating this evolution with a rational function of the ratio $\rho$, the distance corresponded to

$$d=\frac{-0.313\cdot{{\rho }^{3}}+23.43\cdot{{\rho }^{2}}-187.8\cdot\rho +776.6}{{{\rho }^{3}}+8.352\cdot{{\rho }^{2}}-89.28\cdot\rho +509.7}.$$
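Equation (5) is straightforward to evaluate numerically; the sketch below does so with the published coefficients, keeping in mind that the valid ratio range and the distance unit follow the calibration of Fig. 3(c), so the sample ratio values used here are placeholders.

```python
def distance_from_ratio(rho):
    """Evaluate the rational calibration function of Eq. (5).

    rho is the warm color ratio; the unit and the valid range of the result
    follow the calibration of Fig. 3(c), which is not reproduced here.
    """
    num = -0.313 * rho**3 + 23.43 * rho**2 - 187.8 * rho + 776.6
    den = rho**3 + 8.352 * rho**2 - 89.28 * rho + 509.7
    return num / den

# Placeholder ratio values, only to exercise the function.
for rho in (2.0, 4.0, 6.0):
    print(f"rho = {rho:.1f} -> d = {distance_from_ratio(rho):.3f}")
```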

By comparing the distances estimated from the wavelengths with the real distances [Fig. 3(d)], the obtained accuracy was estimated at ${{\pm 484}}\;{{\unicode{x00B5}{\rm m}}}$ [Fig. 3(e)]. It is important to specify that this accuracy corresponds to only one line in the scene and not to the whole scene. To determine the accuracy over the entire illuminated surface, a white balance and diffuse reflectance target was placed and inclined in the scene. To ensure that the same distances were positioned on the same horizontal lines, the “rainbow beam” and the camera were oriented in such a way that wavelengths with the same value appeared on the same horizontal line. In order to eliminate the background noise and the dead pixels, it was necessary to subtract the background image without illumination from the images acquired during the tests. In addition, to eliminate the effects due to an irregularity in the spectral progression over the illumination surface, an interpolation per column of the image of warm color ratios of the inclined target was performed to give the real distance over the entire surface. From that, a matrix with the coefficients of the rational interpolation function, differing along the $x$ axis, was created and applied directly to the test images. To illustrate this point with a simulation, an irregularity in the spectral progression was deliberately accentuated, giving a curvature in the distances obtained from the image of the warm color ratios [Fig. 4(a)]. By interpolating this image per column with the matrix composed of the coefficients of the rational interpolation function, the effects due to the irregularity were eliminated, allowing the real distance to be matched over the entire image [Fig. 4(b)].
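A minimal sketch of this per-column correction is given below; it assumes that each column of the warm-ratio image has its own set of calibration coefficients (simplified here to a cubic polynomial instead of the rational function) obtained beforehand from the inclined target, and the image and coefficients are synthetic placeholders, not the experimental data.

```python
import numpy as np

# Per-column correction of the warm-ratio image: each column x carries its own
# calibration coefficients (simplified here to a cubic polynomial), fitted
# beforehand on the image of the inclined reference target.
# The ratio image and the coefficients below are synthetic placeholders.
height, width = 64, 48
ratio_image = np.random.uniform(2.0, 6.0, size=(height, width))

# coeff_matrix[:, x] = polynomial coefficients (highest degree first) for column x
coeff_matrix = np.tile(np.array([[0.0], [0.05], [-1.0], [40.0]]), (1, width))
coeff_matrix += np.random.normal(0.0, 1e-3, size=coeff_matrix.shape)  # column-to-column variation

def distances_from_ratio_image(ratios, coeffs):
    """Apply the calibration polynomial of each column to the ratio image."""
    out = np.empty_like(ratios)
    for x in range(ratios.shape[1]):
        out[:, x] = np.polyval(coeffs[:, x], ratios[:, x])
    return out

distance_image = distances_from_ratio_image(ratio_image, coeff_matrix)
print(distance_image.shape, float(distance_image.mean()))
```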

Fig. 4. (a) Simulation of an image of the warm color ratios having undergone an effect due to an irregularity in the spectral progression of the illumination surface of the inclined target giving a curvature in the distances, (b) result after the application of the coefficient matrix of the rational interpolation function on the simulated image of the warm color ratios, (c) real result obtained with the inclined target in the scene.

Therefore, by following the previous instructions, the result obtained with the inclined target [Fig. 4(c)] gave an accuracy of the order of 500 µm.

Before controlling the accuracy of the different times according to the cold color wavelengths, we checked whether the selected spectral bands disturbed each other. In more precise terms, we verified that the spectral bands of the warm colors did not interfere with the results of the cold color ratio (Table 1) and, inversely, that the spectral bands of the cold colors did not interfere with the results of the warm color ratio (Table 2).

Table 1. Comparison of Color Ratios without and with Warm Colors

Table 2. Behavior of the Average and the Standard Deviation of the Difference of Distances Estimated from the Warm Color Ratios According to Each Cold Color Wavelength

In the first case (Table 1), only a very small difference is observed when comparing the cold color ratios with and without warm colors; it does not disturb the temporal order represented by the cold color wavelengths. In the second case (Table 2), no influence due to the cold colors is observed in the behavior of the average and the standard deviation of the distance difference.

B. Dynamic Control

To control the correspondence of the different times according to the cold color ratio, a rice grain was mounted on a rotating motor in the scene [Fig. 5(a)]. Knowing the circular trajectory and the speed of this grain (0.3 m/s), the four pulsed LEDs were triggered alternately every 18 ms with a pulse width of the order of 1 ms to record different positions of the object during the same turn [Figs. 5(b)–5(e)].

Fig. 5. (a) Assembly of a rice grain on a rotating motor giving images through (b) the Cold 1 filter, (c) the Cold 2 filter, (d) the Warm 1 filter, and (e) the Warm 2 filter (the contrast of the images has been enhanced to get a better view of details).

From the coordinates of the different rice grain positions in 3D space with their specific times [Figs. 5(b)–5(e)], its trajectory and its speed were evaluated. From these results, the circular movement appears correct (Fig. 6), and the speed is very close to 0.3 m/s (Table 3). The different positions in space remain within the position deviation of 0.5 mm estimated previously, relative to the theoretical trajectory of the rice grain.
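For reference, the speed estimation from the recovered coordinates reduces to dividing the Euclidean distance between successive 3D positions by the known trigger period (18 ms in this control); in the sketch below the positions are placeholders, not the measured ones.

```python
import numpy as np

# Speed estimation from successive 3D positions tagged by the cold colors.
# Coordinates are placeholders in millimeters; the trigger period is 18 ms.
positions_mm = np.array([
    [0.0, 0.0, 10.0],
    [5.2, 1.1, 10.4],
    [10.1, 4.0, 10.9],
    [13.8, 8.2, 11.3],
])
dt = 18e-3  # s, trigger period of the pulsed LEDs

displacements_mm = np.linalg.norm(np.diff(positions_mm, axis=0), axis=1)
speeds = displacements_mm * 1e-3 / dt   # m/s
print("speeds between consecutive positions [m/s]:", np.round(speeds, 3))
```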

Fig. 6. Trajectory of the rice grain mounted on the rotating motor (top view).

Table 3. Estimation of the Rice Grain Speed from the Coordinates of the Different Positions Obtained Every 18 ms

C. Experimental Tests

After this calibration and control phase, the experimental tests were carried out. In the illuminated zone of the scene, the trajectory of moving rice grains was visualized. Coming from a container that was gradually emptied by the movement of a motorized disc, these rice grains were projected onto an oblique plate, launched into the void, and dropped into a recovery tray (Fig. 7). The purpose was to determine, with this imaging method, their real trajectories in the three dimensions as a function of time when they were launched into the void. Then, their speeds and their accelerations were evaluated as a function of time.

Fig. 7. Apparatus used for projecting rice in the scene.

Fig. 8. Projection of the trajectory of rice grains in (a) the ${{XY}}$ plane “width and height,” in (b) the ${{XZ}}$ plane “width and depth,” and in (c) the ${{YZ}}$ plane “height and depth.” (d) 2D image obtained from a single path (the image contrast has been enhanced to get a better view of details).

The test results showed that, based on the recording of a single image composed of the four paths, it was possible to know the position in space of the different rice grains at specific times. For that, the depth of the rice grains was determined from the values of the warm color ratio [Figs. 8(b) and 8(c)]. The other ${{XY}}$ coordinates are given by the position of the grains in the image. From the values of the cold color ratio, the moment of recording of each position of the rice grains was determined. On each curve of Figs. 8(a)–8(c), we can see that the chronology of the positions of each grain is respected in relation to its trajectory. We can also see that one grain arrives before the other in the scene. The advantage of this imaging method is clearly apparent when these results are compared with the 2D image [Fig. 8(d)], which gives the impression that both rice grains move at the same time, in the same plane, and in the same direction. The reality is different in the 3D image (Fig. 9), since each grain is located in space at the right time. Thus, it is possible to know the relative position of each grain with respect to the other in time and to estimate their speeds and their accelerations in space (Fig. 9).

Fig. 9. Trajectory of rice grains as a function of time giving their speeds and their accelerations in the 3D space of the scene in a chronological manner.

Starting with the hypothesis that the acceleration ${\gamma _{{\rm grain}\_n}}$ of each rice grain $n$ was constant, we evaluated it from its real displacements ${d_{T(i + 1)Ti}}$:

$${\gamma _{{\rm grain}\_1}} = \frac{{{d_{T3T2}} - {d_{T2T1}}}}{{{{\Delta}}_t^2}} = 19.16\;{\rm m/s}^{2},$$
$${\gamma _{{\rm grain}\_2}} = \frac{{{d_{T4T3}} - {d_{T3T2}}}}{{{{\Delta}}_t^2}} = 16.37\;{\rm m/s}^{2},$$
where $\Delta {{t}}$ represents the trigger period (difference between $T(i + 1)$ and $Ti$) equal to 9 ms.

The real displacement corresponds to the diagonal in space from the projections of the trajectory on the different planes [Figs. 8(a)–8(c)]. The initial speed was deduced from the first real displacement of the grain:

$${{V}_{\rm init\_grain\_1}}=\frac{{{d}_{T2T1}}-\frac{1}{2}\cdot{{\gamma }_{\rm grain\_1}}\cdot\Delta_{t}^{2}}{{{\Delta}_{t}}},$$
$${{V}_{\rm init\_grain\_2}}=\frac{{{d}_{T3T2}}-\frac{1}{2}\cdot{{\gamma }_{\rm grain\_2}}\cdot\Delta_{t}^{2}}{{{\Delta}_{t}}}.$$

The second speed ${V_{{2\_{\rm grain}}\_n}}$ was determined with the same equations [Eqs. (8) and (9)] but with the next real displacement. The last speed was deduced from the second speed:

$${{V}_{3\_{\rm grain}\_n}}={{\gamma }_{{\rm grain}\_n}}\cdot{{\Delta}_{t}}+{{V}_{2\_{\rm grain}\_n}}.$$
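Under this constant-acceleration hypothesis, Eqs. (6), (8), and (10) chain together as in the short sketch below (shown for a single grain); the two successive displacements are placeholders (only the 9 ms trigger period is taken from the text), so the printed values are merely of the same order as those reported.

```python
# Constant-acceleration evaluation following Eqs. (6), (8), and (10) for one grain.
# The successive real displacements are placeholders (in meters);
# only the 9 ms trigger period is taken from the text.
d_21, d_32 = 2.6e-3, 4.2e-3                  # displacements between T1-T2 and T2-T3 (assumed)
dt = 9e-3                                    # s, trigger period

gamma = (d_32 - d_21) / dt**2                # Eq. (6): acceleration
v_init = (d_21 - 0.5 * gamma * dt**2) / dt   # Eq. (8): speed over the first interval
v_2 = (d_32 - 0.5 * gamma * dt**2) / dt      # same relation applied to the next displacement
v_3 = gamma * dt + v_2                       # Eq. (10): last speed

print(f"acceleration = {gamma:.2f} m/s^2")
print(f"speeds = {v_init:.3f}, {v_2:.3f}, {v_3:.3f} m/s")
```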

All values have been reported in Fig. 9. Observing the speed distribution and the acceleration of each rice grain (Fig. 9), we see that they remain well ordered in the scene space and that they correspond correctly to the movements of the grains, with a consistent set of values. Consequently, these experimental results confirmed the possibility of knowing at specific times the position, speed, and acceleration of millimeter-size objects moving simultaneously in 3D space from a single image giving a four-dimensional space. Thanks to this, there can be no confusion when the trajectories intersect, when they are delayed, or when they are not in the same transverse plane.

4. CONCLUSION

Even though the combined WRAI principle is able to determine the position of a moving object in a four-dimensional space represented by a single image and to deduce its trajectory and its speed, it was shown that it is necessary to modify the illumination type of its juxtaposed style when the dimension of the object is millimetric. This change is due to the fact that there is a technological limit on the minimum values of the laser pulse width and of the camera aperture time, both of which determine the depth of the visualized zone in the scene. So, by using the RVV method within the combined WRAI principle, we showed that it is possible to improve the depth resolution and to determine directly, from a single four-dimensional image, the trajectory, the speed, and the acceleration of millimeter-size objects moving in a given volume, as well as the position chronology of each object. This image was recorded with two different types of illumination, one combining the warm color wavelengths representing the depths and the other the cold color wavelengths representing the different times. Each wavelength corresponds to a specific depth or a specific time. Since the scene was known and remained constantly delimited in the same space, the warm color illumination was carried out as in the RVV method, the purpose being to increase the depth resolution. To differentiate the wavelengths of each category, the image of the scene was multiplexed into four sub-images, corresponding to two sub-images per category. By inserting an appropriate spectral filter in front of each quarter, the image recorded by the camera consisted of four filtered figures of the same scene. A control of the optical setup was necessary to check the correspondence between the ratios and the distances over the entire illumination surface. In the case of an irregularity in the spectral progression, we even created and applied, to eliminate this effect, a matrix composed of the coefficients of a rational interpolation function on the image of warm color ratios. Thus, the ratio between the two filtered images of each category gave values proportional to time or to depth, according to the color category. Thanks to that, it was possible to know the position of each grain in the three dimensions of space at different moments from a single recorded image, giving four dimensions in total. The tests were based on projections of rice grains. The scene was illuminated with a pulsed source having a wide spectral band limited to the warm colors and with pulsed LEDs of different wavelengths for the cold colors. Each grain was located in space at a precise moment thanks to this color combination. Because the warm color wavelengths are not intermittent, it was possible to know the relative position of each grain with an accuracy of the order of 0.5 mm. Based on the positions of each rice grain at the different moments, their speeds and their accelerations, as well as the chronology of their passages, were evaluated. The distribution of these physical quantities in space remained perfectly coordinated. The ability to avoid confusion, such as false intersections, was also clearly demonstrated. Consequently, all the results validated this combined multiple-wavelength range-gated active imaging principle, completed by the rainbow volume velocimetry method, in velocimetry and in accelerometry of millimeter-size objects moving simultaneously in a given volume, while the method always remains independent of the video frequency. Concerning the prospects, the analysis of the behavior of these objects in space–time with a four-dimensional image could be of interest for certain scientific fields, such as detonics.

Funding

Institut Franco-Allemand de Recherches de Saint-Louis.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

REFERENCES

1. M. A. Albota, R. M. Heinrichs, D. G. Kocher, D. G. Fouche, B. E. Player, M. E. O’Brien, B. F. Aull, J. J. Zayhowski, J. Mooney, B. C. Willard, and R. R. Carlson, “Three-dimensional imaging laser radar with a photon-counting avalanche photodiode array and microchip laser,” Appl. Opt. 41, 7671–7678 (2002).

2. D. Bonnier and V. Larochelle, “A range-gated active imaging system for search and rescue, and surveillance operations,” Proc. SPIE 2744, 134–145 (1996).

3. O. Steinvall, H. Olsson, G. Bolander, C. Carlsson, and D. Letalick, “Gated viewing for target detection and target recognition,” Proc. SPIE 3707, 432–448 (1999).

4. A. Matwyschuk, “Direct method of three-dimensional imaging using the multiple-wavelength range-gated active imaging principle,” Appl. Opt. 55, 3782–3786 (2016).

5. A. Matwyschuk, “Multiple-wavelength range-gated active imaging principle in the accumulation mode for three-dimensional imaging,” Appl. Opt. 56, 682–687 (2017).

6. A. Matwyschuk, “Principe d’imagerie active à crénelage temporel multi-longueurs d’onde pour l’imagerie 3D,” Instrum. Mesure Métrologie 16, 255–260 (2017).

7. A. Matwyschuk, “Multiple-wavelength range-gated active imaging in superimposed style for moving object tracking,” Appl. Opt. 56, 7766–7773 (2017).

8. H. E. Edgerton, “Motion-picture apparatus,” U.S. patent 2,186,013 (9 January 1940).

9. A. Matwyschuk, “Combination of the two styles of the multiple-wavelength range-gated active imaging principle for four-dimensional imaging,” Appl. Opt. 59, 7670–7679 (2020).

10. R. Adrian, “Particle-imaging techniques for experimental fluid-mechanics,” Annu. Rev. Fluid Mech. 23, 261–304 (1991).

11. A. Prasad, “Stereoscopic particle image velocimetry,” Exp. Fluids 29, 103–116 (2000).

12. T. Hori and J. Sakakibara, “High-speed scanning stereoscopic PIV for 3D vorticity measurement in liquids,” Meas. Sci. Technol. 15, 1067 (2004).

13. C. Kähler, “Investigation of the spatio-temporal flow structure in the buffer region of a turbulent boundary layer by means of multiplane stereo PIV,” Exp. Fluids 36, 114–130 (2004).

14. R. Wang, X. Li, and Y. Zhang, “Analysis and optimization of the stereo-system with a four-mirror adapter,” J. Eur. Opt. Soc. 3, 1–7 (2008).

15. J. P. Prenel, R. Porcar, S. Reiniche, and G. Diemunsch, “Optical oscilloscope for three-dimensional flow visualization,” Opt. Laser Technol. 18, 208–212 (1986).

16. B. J. Thompson, J. H. Ward, and W. R. Zinky, “Application of hologram techniques for particle size analysis,” Appl. Opt. 6, 519–526 (1967).

17. S. Coëtmellec, C. Buraga-Lefebvre, D. Lebrun, and C. Özkul, “Application of in-line digital holography to multiple plane velocimetry,” Meas. Sci. Technol. 12, 1392 (2001).

18. S. Grare, D. Allano, S. Coëtmellec, G. Perret, F. Corbin, M. Brunel, G. Gréhan, and D. Lebrun, “Dual-wavelength digital holography for 3D particle image velocimetry: experimental validation,” Appl. Opt. 55, A49–A53 (2016).

19. J. P. Prenel, R. Porcar, G. Polidori, A. Texier, and M. Coutanceau, “Wavelength coding laser tomography for flow visualizations,” Opt. Commun. 91, 29–33 (1992).

20. B. Ruck, “Color-coded tomography,” in 7th International Symposium on Fluid Control, Measurement and Visualization (2003).

21. T. Casey, J. Sakakibara, and S. T. Thoroddsen, “Scanning tomographic particle image velocimetry applied to a turbulent jet,” Phys. Fluids 25, 025102 (2013).

22. C. Willert, B. Stasicki, J. Klinner, and S. Moessner, “Pulsed operation of high-power light emitting diodes for imaging flow velocimetry,” Meas. Sci. Technol. 21, 075402 (2010).

23. Y. Murai, T. Yumoto, H. J. Park, and Y. Tasaka, “Color-coded smoke PIV for wind tunnel experiments improved by eliminating optical and digital color contamination,” Exp. Fluids 62, 231 (2021).

24. T. Watamura, Y. Tasaka, and Y. Murai, “LCD-projector-based 3D color PTV,” Exp. Therm. Fluid Sci. 47, 68–80 (2013).

25. J. Menser, F. Schneider, T. Dreier, and S. A. Kaiser, “Multi-pulse shadowgraphic RGB illumination and detection for flow tracking,” Exp. Fluids 59, 90 (2018).

26. J. Klinner and C. Willert, “Tomographic shadowgraphy for three-dimensional reconstruction of instantaneous spray distributions,” Exp. Fluids 53, 531–543 (2012).

27. A. A. Aguirre-Pablo, M. K. Alarfaj, E. Q. Li, J. F. Hernández-Sánchez, and S. T. Thoroddsen, “Tomographic particle image velocimetry using smartphones and colored shadows,” Sci. Rep. 7, 3714 (2017).

28. J. P. Prenel, Y. Bailly, and M. Gbamele, “Three dimensional PSV and trajectography by means of a continuous polychromatic spectrum illumination,” in 2nd Pacific Symposium on Flow Visualization and Image Processing; PSFVIP-2, S. Mochizuki, ed. (1999), p. 77.

29. T. J. McGregor, D. J. Spence, and D. W. Coutts, “Laser-based volumetric colour-coded three-dimensional particle velocimetry,” Opt. Lasers Eng. 45, 882–889 (2007).

30. K. Laurent, M. Barthès, V. Lepiller, and Y. Bailly, “Development of a 3D particle tracking velocimetry method and its associated python-coded software for image processing,” presented at the 10th Pacific Symposium on Flow Visualization and Image Processing, Naples, Italy, June 15, 2015, paper PSFVIP 10-180.

31. J. Xiong, R. Idoughi, A. A. Aguirre-Pablo, A. B. Aljedaani, X. Dun, Q. Fu, S. T. Thoroddsen, and W. Heidrich, “Rainbow particle imaging velocimetry for dense 3D fluid velocity imaging,” ACM Trans. Graph. 36, 1–14 (2017).

32. D. Zibret, Y. Bailly, C. Cudel, and J. P. Prenel, “Direct 3D flow investigations by means of Rainbow Volumic Velocimetry (RVV),” in Pacific Symposium on Flow Visualization and Image Processing; PSFVIP-4; French Congress on Visualizations in Fluid Mechanics; FLUVISU-10 (2003), p. 11.

33. K. Christensen and R. Adrian, “Measurement of instantaneous Eulerian acceleration fields by particle image accelerometry: Method and accuracy,” Exp. Fluids 33, 759–769 (2002).

34. M. J. McPhail, M. H. Krane, A. A. Fontaine, L. Goss, and J. Crafton, “Multicolor particle shadow accelerometry,” Meas. Sci. Technol. 26, 045301 (2015).

35. L. Ding, S. Discetti, R. Adrian, and S. Gogineni, “Multiple-pulse PIV: Numerical Evaluation and Experimental Validation,” presented at the 10th International Symposium on Particle Image Velocimetry, Delft, The Netherlands, July 1, 2013, paper PIV13.

36. A. Matwyschuk, “Doppler effect in the multiple-wavelength range-gated active imaging up to relativistic speeds,” J. Opt. Soc. Am. A 39, 322–331 (2022).

37. C. Hayter, “XVII. Some systematic directions for the application of colours,” in An Introduction to Perspective (Black, Parry and Co., 1813), pp. 142–143.

38. L. F. Gillespie, “Apparent illumination as a function of range in gated, laser night-viewing systems,” J. Opt. Soc. Am. 56, 883–887 (1966).

39. S. Panigrahi, J. Fade, R. Agaisse, H. Ramachandran, and M. Alouini, “An all-optical technique enables instantaneous single-shot demodulation of images at high frequency,” Nat. Commun. 11, 549 (2020).
