## Abstract

We report dual-view band-limited illumination profilometry (BLIP) with temporally interlaced acquisition (TIA) for high-speed, three-dimensional (3D) imaging. Band-limited illumination based on a digital micromirror device enables sinusoidal fringe projection at up to 4.8 kHz. The fringe patterns are captured alternately by two high-speed cameras. A new algorithm, which robustly matches pixels in the acquired images, recovers the object’s 3D shape. The resultant TIA–BLIP system enables 3D imaging at over 1000 frames per second on a field of view (FOV) of up to 180 mm × 130 mm (corresponding to $1180\times 860$ pixels) in captured images. We demonstrated TIA–BLIP’s performance by imaging various static and fast-moving 3D objects. TIA–BLIP was applied to imaging glass vibration induced by sound and glass breakage by a hammer. Compared to existing methods in multiview phase-shifting fringe projection profilometry, TIA–BLIP eliminates information redundancy in data acquisition, which improves the 3D imaging speed and the FOV. We envision TIA–BLIP being broadly implemented in diverse scientific studies and industrial applications.

© 2020 Chinese Laser Press

## 1. INTRODUCTION

Three-dimensional (3D) surface imaging has been extensively applied in numerous fields in industry, entertainment, and biomedicine [1,2]. Among existing methods, structured-light profilometry has gained increasing popularity in measuring dynamic 3D objects because of its high measurement accuracy and high imaging speeds [3–6]. As the most widely used method in structured-light profilometry, phase-shifting fringe projection profilometry (PSFPP) uses a set of sinusoidal fringe patterns as the basis for coordinate encoding. In contrast to other structured-light approaches, such as binary pattern projection [7], the pixel-level information carried by the phase of the fringe patterns is insensitive to variations in reflectivity across an object’s surface, which enables high accuracy in 3D measurements [8]. The sinusoidal fringes employed in PSFPP are commonly generated using digital micromirror devices (DMDs). Each micromirror on the DMD can be independently tilted to either $+12^{\circ}$ or $-12^{\circ}$ from its surface normal to generate binary patterns at up to tens of kilohertz. Despite being a binary amplitude spatial light modulator [9], the DMD can be used to generate grayscale fringe patterns at high speeds [10–14]. The conventional dithering method controls the average reflectance of each micromirror to form a grayscale image; however, it clamps the projection rate of fringe patterns at hundreds of hertz. To improve the projection speed, binary defocusing techniques [13] have been developed to produce a quasi-sinusoidal pattern by slightly defocusing a single binary DMD pattern. Nonetheless, the image is generated at a plane unconjugate to the DMD, which compromises the depth-sensing range and makes operation with fringe patterns of different frequencies less convenient.
Recently, these limitations have been lifted by the development of band-limited illumination [14], which controls the system bandwidth by placing a pinhole low-pass filter at the Fourier plane of a $4f$ imaging system. Both the binary defocusing method and the band-limited illumination scheme allow the generation of one grayscale sinusoidal fringe pattern from a single binary DMD pattern. Thus, the fringe projection speed matches the DMD’s refresh rate.

High-speed image acquisition is also indispensable to DMD-based PSFPP. In the standard phase-shifting methods, extra calibration patterns must be used to avoid phase ambiguity [15], which reduces the overall 3D imaging speed [16]. A solution to this problem is to place multiple cameras on both sides of the projector to simultaneously capture the full sequence of fringe patterns [17–20]. These multiview approaches provide enriched observations of 3D objects in data acquisition. Pixel matching between different views is achieved with various forms of assistance, including epipolar line rectification [17], measurement-volume-dependent geometry [18], and wrapped-phase monotonicity [19]. Using these methods, the object’s 3D surface information can be directly retrieved from the wrapped phase maps [5]. Consequently, the necessity of calibration patterns is eliminated in data acquisition and phase unwrapping. This advancement, along with the ever-increasing imaging speeds of cameras [21–24], has endowed multiview PSFPP systems with image acquisition rates that keep up with the DMD’s refresh rates.

Despite these advantages, existing multiview PSFPP systems have two main limitations. First, each camera captures the full sequence of fringe patterns. This requirement imposes redundancy in data acquisition, which ultimately clamps the systems’ imaging speeds. Given the finite readout rates of camera sensors, a sacrifice of the field of view (FOV) is inevitable for higher imaging speeds. Although advanced signal processing approaches such as image interpolation [25] and compressed sensing [26] have been applied to mitigate this trade-off, they usually are accompanied by high computational complexity and reduced image quality [27]. Second, the cameras are mostly placed on different sides of the projector. This arrangement could induce a large intensity difference from the directional scattering light and the shadow effect from the occlusion by local surface features, both of which reduce the reconstruction accuracy and pose challenges in imaging non-Lambertian surfaces [20].

To overcome these limitations, we have developed dual-view band-limited illumination profilometry (BLIP) with temporally interlaced acquisition (TIA). A new algorithm is developed for coordinate-based 3D point matching between different views. Implemented with two cameras, TIA allows each to capture half of the sequence of the phase-shifted patterns, reducing the data-transfer load of each camera by 50%. This freed capacity is used either to transfer data from more pixels on each camera’s sensor or to run both cameras at higher frame rates. In addition, the two cameras are placed as close as possible on the same side of the projector, which largely mitigates the intensity difference and shadow effects. Leveraging these advantages, TIA–BLIP has enabled high-speed 3D imaging of glass vibration induced by sound and glass breakage by a hammer.

## 2. METHOD

#### A. Setup

The schematic of the TIA–BLIP system is shown in Fig. 1(a). A 200 mW continuous-wave laser (wavelength $\lambda =671\,\mathrm{nm}$, MRL-III-671, CNI Lasers) is used as the light source. After expansion and collimation, the laser beam is directed to a 0.45″ DMD (AJD-4500, Ajile Light Industries) at an incident angle of $\sim 24^{\circ}$ to its surface normal. Four phase-shifting binary patterns, generated by an error diffusion algorithm [28] from their corresponding grayscale sinusoidal patterns, are loaded onto the DMD. A band-limited $4f$ imaging system that consists of two lenses [Lens 1 and Lens 2 in Fig. 1(a)] and one pinhole converts these binary patterns to grayscale fringes at the intermediate image plane. The two lenses have focal lengths of ${f}_{1}=120\,\mathrm{mm}$ and ${f}_{2}=175\,\mathrm{mm}$. The pinhole works as a low-pass filter. Its diameter, determined by the system bandwidth, is calculated as

$$D=\frac{\lambda {f}_{1}}{{p}_{\mathrm{f}}},$$

where ${p}_{\mathrm{f}}=324\,\mathrm{\mu m}$ denotes the fringe period composed of 30 DMD pixels. Thus, the required pinhole diameter is $D=248.52\,\mathrm{\mu m}$. In the experiment, a 300 μm diameter pinhole is used to ensure that all spatial frequency content of the sinusoidal fringe pattern passes through the system. Then, a camera lens (AF-P DX NIKKOR, Nikon, 18–55 mm focal length) projects these fringe patterns onto a 3D object. The deformed structure images are captured alternately by two high-speed CMOS cameras (CP70-1HS-M-1900, Optronis) with camera lenses (AZURE-3520MX5M, AZURE Photonics, 35 mm focal length) placed side by side. The distance between these two cameras is $\sim 12\,\mathrm{cm}$. The difference in their viewing angles to the 3D object is $\sim 10^{\circ}$. Depending on their roles in image reconstruction, they are denoted as the main camera and the auxiliary camera. Synchronized by the DMD’s trigger signal, each camera captures half of the sequence [Fig. 1(b)]. The acquired images from each camera are transferred to a computer via a CoaXPress cable connected to a frame grabber (Cyton-CXP, Bitflow).

#### B. System Calibration

To recover 3D information from the mutually incomplete images provided by the interlaced acquisition, TIA–BLIP relies on a coordinate-based understanding of the spatial relationship of the projector and both cameras in image formation. In particular, a “pinhole” model [29],

$$s{\left[u,v,1\right]}^{T}=\mathbf{A}\left[\mathbf{R}\;\;\mathbf{t}\right]{\left[x,y,z,1\right]}^{T},$$

describes how a 3D point $(x,y,z)$ maps to a pixel $(u,v)$, where $s$ is a scalar factor, $\mathbf{A}$ contains the intrinsic parameters, and $\mathbf{R}$ and $\mathbf{t}$ are the extrinsic rotation and translation.
Based on the pinhole model, both cameras and the projector can be calibrated to determine the values of these parameters. Using a checkerboard as the calibration object, we adopted the established calibration procedure and software toolbox [30]. Since direct image acquisition is not possible for a projector, the phase-based mapping method [29] was used to synthesize projector-centered images of the calibration object. These images were subsequently sent to the toolbox, with calibration proceeding in the same manner as for the cameras.
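For intuition, the forward projection described by the pinhole model can be sketched in a few lines of numpy. The intrinsic and extrinsic values below are hypothetical stand-ins, not the calibrated parameters of the actual system.

```python
import numpy as np

def project(K, R, t, X):
    # Pinhole model: s * [u, v, 1]^T = K @ (R @ X + t).
    p = K @ (R @ X + t)
    return p[:2] / p[2]  # divide out the scalar factor s

# Hypothetical intrinsics (focal lengths and principal point, in pixels)
K = np.array([[1500.0, 0.0, 590.0],
              [0.0, 1500.0, 430.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)  # device placed at the world origin
u, v = project(K, R, t, np.array([0.1, 0.2, 2.0]))
```

Calibration then amounts to estimating `K`, `R`, and `t` for each device from checkerboard observations.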

#### C. Coordinate-Based 3D Point Determination

In the context of the pinhole model of Eq. (2), a coordinate-based method is used to recover 3D information from a calibrated imaging system. Two independent coordinates correspond to a point on a 3D object with the coordinates $(x,y,z)$: $(u,v)$ for the camera and $({u}^{\prime \prime},{v}^{\prime \prime})$ for the projector. In a calibrated PSFPP system, any three of these coordinates [i.e., $(u,v,{u}^{\prime \prime},{v}^{\prime \prime})$] can be determined. Then a linear system of the form $E=M{[x,y,z]}^{T}$ is derived. The elements of $E$ and $M$ are found by using each device’s calibration parameters as well as by using the scalar factors and the three determined coordinates [31]. In this way, the 3D information of an object point can be extracted via matrix inversion.
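A minimal numpy sketch of this coordinate-based extraction, assuming each calibrated device is summarized by a $3\times 4$ projection matrix (the matrices and the test point below are illustrative, not the system's calibrated values):

```python
import numpy as np

def recover_point(P_cam, P_proj, u, v, u_pp):
    """Solve E = M [x, y, z]^T from three determined coordinates.

    Each known coordinate c of a device with projection matrix P gives one
    linear equation: c * (P[2] @ X) - P[row] @ X = 0, with X = [x, y, z, 1].
    """
    rows, rhs = [], []
    for P, coords in ((P_cam, ((0, u), (1, v))), (P_proj, ((0, u_pp),))):
        for row, c in coords:
            a = c * P[2] - P[row]      # homogeneous coefficients (length 4)
            rows.append(a[:3])
            rhs.append(-a[3])
    M, E = np.array(rows), np.array(rhs)
    return np.linalg.solve(M, E)       # the 3D point (x, y, z)

# Demo with hypothetical matrices: camera at the origin, projector shifted in x
K = np.array([[1000.0, 0.0, 500.0],
              [0.0, 1000.0, 400.0],
              [0.0, 0.0, 1.0]])
P_cam = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P_proj = K @ np.hstack([np.eye(3), np.array([[0.1], [0.0], [0.0]])])
X_true = np.array([0.2, -0.1, 1.5])
h = np.append(X_true, 1.0)
u, v = (P_cam @ h)[:2] / (P_cam @ h)[2]
u_pp = (P_proj @ h)[0] / (P_proj @ h)[2]
X_rec = recover_point(P_cam, P_proj, u, v, u_pp)
```

Projecting a known point and solving the resulting $3\times 3$ system recovers it exactly, which mirrors the matrix-inversion step described above.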

This analysis can be adapted to dual-view TIA–BLIP. First, images from a selected calibrated camera are used to provide the coordinates $(u,v)$ of a point on a 3D object. Along with the system’s calibration parameters, an epipolar line is determined on the other camera. The horizontal coordinate in the images of this camera is recovered using search-based algorithms along this epipolar line—a procedure commonly referred to as stereo vision. Second, by substituting a calibrated projector for the secondary camera, structured light methods use the intensity values of the pixel $(u,v)$ across a sequence of images to recover information about a coordinate of the projector. By incorporating both aspects, 3D information can be extracted pixel by pixel based on interlaced image acquisition.

### 1. Data Acquisition

In data acquisition, four fringe patterns, whose phases are equally shifted by $\pi /2$, illuminate a 3D object. The intensity value for the pixel $(u,v)$ in the $k$th acquired image, ${I}_{k}(u,v)$, is expressed as

$${I}_{k}(u,v)=A(u,v)+B(u,v)\cos\left[\phi (u,v)+\frac{k\pi }{2}\right],\quad k=0,1,2,3,$$

where $A(u,v)$ is the average intensity, $B(u,v)$ is the fringe modulation, and $\phi (u,v)$ is the depth-dependent phase.
Equation (3) allows the analysis of two types of intensity-matching conditions for the order of pattern projection shown in Fig. 1(b). The coordinates of a selected pixel in the images of the main camera are denoted by $({u}_{\mathrm{m}},{v}_{\mathrm{m}})$. For a pixel $({u}_{\mathrm{a}}^{\prime},{v}_{\mathrm{a}}^{\prime})$ in the images of the auxiliary camera that perfectly corresponds with $({u}_{\mathrm{m}},{v}_{\mathrm{m}})$, Eq. (3) allows us to write

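As a numerical aside, under the common four-step convention ${I}_{k}=A+B\cos (\phi +k\pi /2)$ (a standard form; the paper's exact indexing may differ), the wrapped phase follows from two frame differences. A minimal sketch:

```python
import numpy as np

def wrapped_phase(I0, I1, I2, I3):
    # With I_k = A + B*cos(phi + k*pi/2):
    #   I3 - I1 = 2B*sin(phi)  and  I0 - I2 = 2B*cos(phi),
    # so the wrapped phase is recovered by a four-quadrant arctangent.
    return np.arctan2(I3 - I1, I0 - I2)

# Synthetic check: generate four shifted frames from a known phase ramp
phi_true = np.linspace(-3.0, 3.0, 100)
A, B = 0.5, 0.4
frames = [A + B * np.cos(phi_true + k * np.pi / 2) for k in range(4)]
phi_rec = wrapped_phase(*frames)
```

Note that $A$ and $B$ cancel in the ratio, which is why the phase is insensitive to reflectivity variations.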
### 2. Image Reconstruction

We developed a four-step algorithm to recover the 3D image of the object pixel by pixel. In brief, for a selected pixel $({u}_{\mathrm{m}},{v}_{\mathrm{m}})$ of the main camera, the algorithm locates a matching point $({u}_{\mathrm{a}}^{\prime},{v}_{\mathrm{a}}^{\prime})$ in the images of the auxiliary camera. Using the camera calibration, this matching point then yields estimated 3D coordinates and a wrapped phase. Using the projector calibration, this phase value is used to calculate a horizontal coordinate on the projector’s plane. A final 3D point is then recovered using the coordinate-based method. A flowchart of this algorithm is provided in Fig. 2(a).

In the first step, $({I}_{0}+{I}_{1})/2$ and $({I}_{2}+{I}_{3})/2$ are calculated. Then, a threshold intensity, calculated from a selected background region, is used to eliminate pixels with low intensities. The thresholding results in a binary quality map [see Step I in Fig. 2(a)]. Subsequently, only pixels that fall within the quality map of the main camera are considered for 3D information recovery.
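Step I can be sketched as follows. The specific threshold rule (background mean plus three standard deviations) is an assumption for illustration, as the text does not specify one.

```python
import numpy as np

def quality_map(I0, I1, I2, I3, bg_mask, n_sigma=3.0):
    # Average the two frame pairs, as in Step I.
    avg_01 = (I0 + I1) / 2.0
    avg_23 = (I2 + I3) / 2.0
    # Threshold from a selected background region (assumed rule:
    # mean + n_sigma * std of the background intensities).
    bg = avg_01[bg_mask]
    thresh = bg.mean() + n_sigma * bg.std()
    return (avg_01 > thresh) & (avg_23 > thresh)

# Synthetic frames: dim background with a bright 4x4 object patch
frames = [np.full((8, 8), 10.0) for _ in range(4)]
for f in frames:
    f[2:6, 2:6] = 100.0
bg_mask = np.zeros((8, 8), bool)
bg_mask[0, :] = True                # user-selected background row
qmap = quality_map(*frames, bg_mask)
```

Only pixels where the map is true would be passed on to the later matching steps.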

In the second step, the selected pixel $({u}_{\mathrm{m}},{v}_{\mathrm{m}})$ determines an epipolar line containing the matching point within the auxiliary camera’s images. Then, the algorithm extracts the candidates $({u}_{\mathrm{a}i}^{\prime},{v}_{\mathrm{a}i}^{\prime})$ that satisfy the intensity-matching condition [i.e., Eq. (5); illustrative data shown in Fig. 2(b)] in addition to three constraints [see Step II in Fig. 2(a)]. The subscript “$i$” denotes the $i$th candidate. As displayed in the illustrative data in Fig. 2(c), the first constraint requires candidates to fall within the quality map of the auxiliary camera. The second constraint requires that candidates occur within a segment of the epipolar line determined by a fixed transformation that approximates the location of the matching point. This approximation is provided by a 2D projective transformation (or homography) that determines the estimated corresponding point $({u}_{\mathrm{e}}^{\prime},{v}_{\mathrm{e}}^{\prime})$ by [32]

$$s{\left[{u}_{\mathrm{e}}^{\prime},{v}_{\mathrm{e}}^{\prime},1\right]}^{T}=\mathbf{H}{\left[{u}_{\mathrm{m}},{v}_{\mathrm{m}},1\right]}^{T},$$

where $s$ is a scalar factor and $\mathbf{H}$ is a fixed $3\times 3$ homography matrix determined in advance.

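Applying the 2D projective transformation to obtain $({u}_{\mathrm{e}}^{\prime},{v}_{\mathrm{e}}^{\prime})$ amounts to a matrix-vector product in homogeneous coordinates. In this sketch, `H` is a made-up matrix, not the one calibrated for the actual system:

```python
import numpy as np

def estimate_point(H, u_m, v_m):
    # Apply the 2D projective transformation and de-homogenize.
    p = H @ np.array([u_m, v_m, 1.0])
    return p[0] / p[2], p[1] / p[2]

H = np.array([[1.0, 0.0, 40.0],   # hypothetical: mostly a shift in u
              [0.0, 1.0, 2.0],
              [0.0, 0.0, 1.0]])
u_e, v_e = estimate_point(H, 600.0, 430.0)
```

The search for candidates is then restricted to an epipolar-line segment around this estimate.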
The third constraint requires the selected point and candidates to have the same sign of their wrapped phases [Fig. 2(d)]. Estimates of the wrapped phases are obtained using the technique of Fourier transform profilometry [5]. In particular, by bandpass filtering the left side of Eq. (5), i.e., ${I}_{0}-{I}_{1}$, the intensity of pixel $({u}_{\mathrm{m}},{v}_{\mathrm{m}})$ in the filtered image is

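A single-axis sketch of this Fourier transform profilometry step, assuming vertical fringes; the carrier location and filter width below are illustrative, and the actual system's bandpass filter may differ:

```python
import numpy as np

def ftp_wrapped_phase(diff_img, carrier_col, half_width):
    # Bandpass filter (e.g., I0 - I1) around the +1 fringe order, then
    # take the angle of the complex-valued filtered image.
    F = np.fft.fftshift(np.fft.fft2(diff_img), axes=1)
    mask = np.zeros(F.shape)
    mask[:, carrier_col - half_width:carrier_col + half_width + 1] = 1.0
    filtered = np.fft.ifft2(np.fft.ifftshift(F * mask, axes=1))
    return np.angle(filtered)

# Synthetic vertical fringes with a 16-pixel period on a 128-wide image
x = np.arange(128)
fringe = np.tile(np.cos(2 * np.pi * x / 16), (8, 1))
phase = ftp_wrapped_phase(fringe, carrier_col=128 // 2 + 128 // 16, half_width=2)
```

Keeping only the positive-frequency sideband turns the real fringe into a complex exponential whose angle is the wrapped phase estimate.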
In the third step, three criteria are used to calculate penalty scores for each candidate, as shown in Step III in Fig. 2(a). The scheme is shown in Fig. 2(e). The first and primary criterion compares the phase values of the candidates using two methods. First, the phase inferred from the intensities of candidates and the pixel $({u}_{\mathrm{m}},{v}_{\mathrm{m}})$ is calculated by

To improve the robustness of the algorithm, two additional criteria are implemented using data available from the second step. ${B}_{i}$ is a normalized distance score favoring candidates located closer to the estimated matching point $({u}_{\mathrm{e}}^{\prime},{v}_{\mathrm{e}}^{\prime})$, which is calculated by

Moreover, ${C}_{i}$ is a normalized difference of the wrapped phase values ${\omega}_{\mathrm{m}}$ and ${\omega}_{\mathrm{a}i}^{\prime}$. A total penalty score for each candidate is then computed as the weighted linear combination

$${S}_{i}={\eta}_{1}{A}_{i}+{\eta}_{2}{B}_{i}+{\eta}_{3}{C}_{i},$$

where ${A}_{i}$ denotes the score of the first criterion and the normalized weights $[{\eta}_{1},{\eta}_{2},{\eta}_{3}]=[0.73,0.09,0.18]$ are empirically chosen to yield results most consistent with the physical reality. Finally, the candidate with the minimum ${S}_{i}$ is chosen as the matching point $({u}_{\mathrm{a}}^{\prime},{v}_{\mathrm{a}}^{\prime})$. Its phase values, calculated by using Eqs. (10) and (11), are denoted as ${\phi}_{\mathrm{a}}^{\prime}$ and ${\phi}_{\mathrm{p}}^{\prime \prime}$, respectively.

In the final step, the algorithm determines the final 3D coordinates [see Step IV in Fig. 2(a) and the scheme in Fig. 2(e)]. First, ${\phi}_{\mathrm{a}}^{\prime}$ is unwrapped as ${\phi}_{\mathrm{a}}^{\prime}+2\pi q$, where $q$ is an integer making ${\phi}_{\mathrm{p}}^{\prime \prime}-({\phi}_{\mathrm{a}}^{\prime}+2\pi q)\in (-\pi ,\pi ]$. Then, the coordinate on the projector’s plane, ${u}_{\mathrm{p}}^{\prime \prime}$, is recovered with subpixel resolution as
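The candidate selection and unwrapping just described can be sketched as below; the first criterion's score is denoted `A` here for illustration, and the closed-form choice of the integer $q$ is an assumption consistent with the stated interval.

```python
import numpy as np

def select_candidate(A, B, C, weights=(0.73, 0.09, 0.18)):
    # Total penalty: weighted linear combination of the three scores.
    S = (weights[0] * np.asarray(A)
         + weights[1] * np.asarray(B)
         + weights[2] * np.asarray(C))
    return int(np.argmin(S))          # index of the matching candidate

def unwrap(phi_a, phi_p):
    # Integer q such that phi_p - (phi_a + 2*pi*q) lies in (-pi, pi].
    q = np.ceil((phi_p - phi_a - np.pi) / (2.0 * np.pi))
    return phi_a + 2.0 * np.pi * q
```

With these two helpers, each main-camera pixel yields one unwrapped phase value, from which the projector coordinate follows.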

## 3. RESULTS

#### A. Quantification of Depth Resolution

To quantify the depth resolution of TIA–BLIP at different exposure times, we imaged two stacked planar surfaces offset by $\sim 9^{\circ}$ (Fig. 3). Reconstructed results at four representative exposure times (denoted as ${t}_{\mathrm{e}}$) are shown in Fig. 3(a). One area on each surface [marked by white solid boxes in Fig. 3(a)] was selected in the reconstructed image. The depth profile along the $x$ axis was calculated by averaging the depth values along the $y$ axis. The difference in depth between the two surfaces is denoted by ${z}_{\mathrm{d}}$, and the noise is defined as the average of the standard deviations in depth from both surfaces. The depth resolution is defined as the value of ${z}_{\mathrm{d}}$ that equals twice the system’s noise level. As shown in the four plots in Fig. 3(a), the reconstruction results deteriorate with shorter exposure times, manifested by increased noise levels and more points for which 3D information cannot be retrieved. As a result, the depth resolution degrades from 0.06 mm at ${t}_{\mathrm{e}}=950\,\mathrm{\mu s}$ to 0.45 mm at ${t}_{\mathrm{e}}=150\,\mathrm{\mu s}$ [Fig. 3(b)]. At ${t}_{\mathrm{e}}=100\,\mathrm{\mu s}$, TIA–BLIP fails in 3D measurement: the region of unsuccessful reconstruction prevails across most of the planar surfaces, and the noise dominates the calculated depth difference, which is attributed to the low signal-to-noise ratio of the captured images.
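The resolution criterion can be expressed directly; the synthetic depth samples below are illustrative, not measured data.

```python
import numpy as np

def resolvable(depth1, depth2):
    # z_d: depth difference between the two y-averaged surface profiles.
    z_d = abs(np.mean(depth1) - np.mean(depth2))
    # Noise: average of the standard deviations from both surfaces.
    noise = 0.5 * (np.std(depth1) + np.std(depth2))
    return z_d >= 2.0 * noise  # resolvable when z_d reaches twice the noise

rng = np.random.default_rng(0)
surf_a = 0.00 + 0.05 * rng.standard_normal(500)  # plane at z = 0 mm
surf_b = 0.30 + 0.05 * rng.standard_normal(500)  # plane offset by 0.30 mm
```

Here a 0.30 mm step is resolvable at a 0.05 mm noise level, while a 0.05 mm step is not.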

#### B. Imaging of Static 3D Objects

To examine the feasibility of TIA–BLIP, we imaged various static 3D objects. First, two sets of 3D distributed letter toys that composed the words “LACI” and “INRS” were imaged. Shown in Fig. 4(a), the two perspective views of the reconstructed results reveal the 3D position of each letter toy. The detailed surface structures are illustrated by the selected depth profiles [see the white dashed lines and the magenta dashed boxes in Fig. 4(a)]. We also conducted a proof-of-concept experiment on three cube toys with fine structures (with a depth of $\sim 4\text{\hspace{0.17em}\hspace{0.17em}}\mathrm{mm}$) on the surfaces. Depicted in Fig. 4(b), the detailed structural information of these cube toys is recovered by TIA–BLIP.

#### C. Imaging of Dynamic 3D Objects

To verify high-speed 3D surface profilometry, we used TIA–BLIP to image two dynamic scenes: a moving hand and three bouncing balls. The fringe patterns were projected at 4 kHz. The exposure time of both cameras was ${t}_{\mathrm{e}}=250\,\mathrm{\mu s}$. Under these experimental conditions, TIA–BLIP had a 3D imaging speed of 1000 frames per second (1 kfps), an FOV of 180 mm × 130 mm (corresponding to $1180\times 860$ pixels) in captured images, and a depth resolution of 0.24 mm. Figure 5(a) shows the reconstructed 3D images of the moving hand at five time points from 0 ms to 60 ms with a time interval of 15 ms (see the full evolution in Visualization 1). TIA–BLIP’s high-speed 3D imaging allowed tracking the movements of four fingertips. As shown in Fig. 5(b), all four fingers move appreciably along both the $x$ axis and the $z$ axis but stay relatively stationary along the $y$ axis, which agrees with the experimental conditions.

In the second experiment, three white balls, each of which was marked by a different letter on its surface, bounced in an inclined transparent container. Figure 5(c) shows five representative reconstructed images from 8 ms to 28 ms with a time interval of 5 ms. The changes of the letter “C” on ${\mathrm{B}}_{1}$ and the letter “L” on ${\mathrm{B}}_{2}$ [marked in the third panel of Fig. 5(c)] clearly show the rotation of the two balls (see the full evolution in Visualization 2). TIA–BLIP enabled tracking the 3D centroids of each ball over time. Shown in Fig. 5(d), ${\mathrm{B}}_{1}$ collides with ${\mathrm{B}}_{2}$ at 16 ms, resulting in a sudden change in the moving directions. This collision temporarily interrupted the free fall of ${\mathrm{B}}_{1}$, represented by the two turning points in the curve of evolution along the $y$ axis [see the second panel of Fig. 5(d)]. The collision also changed the moving direction of ${\mathrm{B}}_{2}$, making it touch the base at 27 ms and then bounce up. In this scene, ${\mathrm{B}}_{3}$ maintained its movement in a single direction in both the $x$ axis and the $z$ axis. It fell onto the base and bounced back at 16 ms, resulting in a turning point in its $y\text{-}t$ curve. Because of the inclined bottom plane, the $y$ value of ${\mathrm{B}}_{3}$ at 16 ms was smaller than that of ${\mathrm{B}}_{2}$ at 27 ms.

Under the same experimental settings and pattern sequence choice, TIA–BLIP surpasses existing PSFPP techniques in pixel count and hence in imaging FOV. At the 1 kfps 3D imaging speed, standard single-camera PSFPP [14] and multiview PSFPP [19] systems would restrict their imaging FOVs to $512\times 512$ pixels and $768\times 640$ pixels, respectively [33]. In contrast, TIA, with a frame size of $1180\times 860$ pixels, increases the FOV by factors of 3.87 and 2.07, respectively.

#### D. Imaging of Sound-Induced Vibration on Glass

To highlight the broad utility of TIA–BLIP, we imaged sound-induced vibration on glass. In this experiment [Fig. 6(a)], a glass cup was fixed on a table, and its surface was painted white. A function generator drove a speaker to produce single-frequency sound signals (from 450 Hz to 550 Hz in steps of 10 Hz) through a sound channel placed close to the cup’s wall. To image the vibration dynamics, fringe patterns were projected at 4.8 kHz, and the cameras had an exposure time of ${t}_{\mathrm{e}}=205\,\mathrm{\mu s}$. This configuration enabled a 3D imaging speed of 1.2 kfps, an FOV of 146 mm × 130 mm (corresponding to $960\times 860$ pixels) in captured images, and a depth resolution of 0.31 mm. Figure 6(b) shows four representative 3D images of the instantaneous shapes of the glass cup driven by the 500 Hz sound signal (the full sequence is shown in Visualization 3), revealing the dynamics of the structural deformation of the glass cup. The evolution of depth changes was analyzed using five selected points [marked by ${P}_{\mathrm{A}}$ to ${P}_{\mathrm{E}}$ in the first panel of Fig. 6(b)]. As shown in Fig. 6(c), the depth changes of the five points agree with one another, which is attributed to the rigidity of the glass.

We further analyzed time histories of averaged depth displacements under different sound frequencies. Figure 6(d) shows the results at the driving frequencies of 490 Hz, 500 Hz, and 510 Hz. Each result was fitted by a sinusoidal function with frequencies of 490.0 Hz, 499.4 Hz, and 508.6 Hz, respectively. These results show that the rigid glass cup vibrated in compliance with the driving frequency. Moreover, the amplitudes of the fitted results, $\mathrm{\Delta}{z}_{\mathrm{fit}}$, were used to determine the relationship between the depth displacement and the sound frequency [Fig. 6(e)]. We fitted this result with a Lorentzian function, which determined the resonant frequency of this glass cup to be 499.0 Hz.
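The resonance extraction can be sketched with a numpy-only least-squares grid search, a stand-in for a full nonlinear Lorentzian fit; the grid ranges and the synthetic amplitudes below are illustrative.

```python
import numpy as np

def lorentzian(f, a, f0, gamma):
    # Amplitude response with peak a, resonance f0, and half-width gamma.
    return a * gamma**2 / ((f - f0)**2 + gamma**2)

def fit_resonance(freqs, amps):
    """Grid-search (f0, gamma); the amplitude a has a closed-form solution."""
    freqs, amps = np.asarray(freqs, float), np.asarray(amps, float)
    best_err, best_f0 = np.inf, None
    for f0 in np.arange(freqs.min(), freqs.max() + 0.05, 0.1):
        for gamma in np.arange(1.0, 30.0, 0.5):
            m = gamma**2 / ((freqs - f0)**2 + gamma**2)
            a = (m @ amps) / (m @ m)          # linear least squares for a
            err = np.sum((amps - a * m)**2)
            if err < best_err:
                best_err, best_f0 = err, f0
    return best_f0

freqs = np.arange(450.0, 551.0, 10.0)      # tested sound frequencies (Hz)
amps = lorentzian(freqs, 1.0, 499.0, 8.0)  # synthetic displacement amplitudes
```

In practice, a dedicated curve-fitting routine would replace the grid search, but the closed-form amplitude step keeps this sketch dependency-free.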

It is worth noting that this phenomenon would be difficult to capture using previous methods under the same experimental settings and pattern sequence choice. With a frame size of $960\times 860$ pixels, the maximum frame rate of the cameras used is 2.4 kfps, which translates to a 3D imaging speed of 480 fps for single-camera-based PSFPP and 600 fps for multiview PSFPP. Neither provides sufficient imaging speed to visualize the glass vibration at the resonant frequency. In contrast, TIA improves the 3D imaging speed to 1.2 kfps, which is fully capable of sampling the glass vibration dynamics over the tested frequency range of 450–550 Hz.

#### E. Imaging of Glass Breakage

To further apply TIA–BLIP to recording nonrepeatable 3D dynamics, we imaged the process of glass breaking by a hammer (the full sequence is shown in Visualization 4). As displayed in Fig. 7(a), the growth of cracks and the burst of fragments of different shapes and sizes are clearly shown in the reconstructed 3D images. The time courses of the velocities of four fragments [marked by ${F}_{\mathrm{A}}$ to ${F}_{\mathrm{D}}$ in Fig. 7(a)] are plotted in Fig. 7(b). The velocities along the $y$ axis are small compared with those along the other two directions, which indicates that the impact of the hammer was exerted in the $x\text{-}z$ plane. The ${v}_{y}$ of fragments ${F}_{\mathrm{A}}$ and ${F}_{\mathrm{C}}$ shows that they moved upward until 13 ms and fell afterward. The ${v}_{y}$ of fragments ${F}_{\mathrm{B}}$ and ${F}_{\mathrm{D}}$ reveals that they fell onto the remaining base of the cup at 15 ms and kept sliding down its surface. The ${v}_{z}$ data illustrate that ${F}_{\mathrm{A}}$ and ${F}_{\mathrm{C}}$ moved closer to the cameras, driven directly by the hammer’s force. However, ${F}_{\mathrm{B}}$ and ${F}_{\mathrm{D}}$, which collided with other pieces, maintained positive ${v}_{z}$ and moved away from the cameras. The corresponding accelerations are displayed in Fig. 7(c), which indicates the influence of both the main strike and the ensuing collisions among different fragments. At 14 ms, the collision with other fragments, which applied an impact along the $+x$ direction, dominated the acceleration direction for all four tracked fragments. In contrast, at 15 ms, another collision produced an impact in the $-x$ direction, causing a sharp decrease in the acceleration of ${F}_{\mathrm{A}}$ and ${F}_{\mathrm{C}}$. In addition, the direction of the acceleration of ${F}_{\mathrm{D}}$ along the $y$ axis changed several times, which is attributed to several collisions of ${F}_{\mathrm{D}}$ with the base of the glass cup while sliding down.

## 4. DISCUSSION AND CONCLUSIONS

We have developed TIA–BLIP with a kfps-level 3D imaging speed over an FOV of up to 180 mm × 130 mm (corresponding to $1180\times 860$ pixels) in captured images. This technique implements TIA in multiview 3D PSFPP systems, which allows each camera to capture half of the sequence of the phase-shifting fringes. Leveraging the characteristics indicated in the intensity-matching condition [i.e., Eq. (5)], the newly developed algorithm applies constraints in geometry and phase to find the matching pair of points in the main and auxiliary cameras and guides phase unwrapping to extract the depth information. TIA–BLIP has empowered the 3D visualization of glass vibration induced by sound and the glass breakage by a hammer.

TIA–BLIP possesses many advantages. First, TIA eliminates the redundant capture of fringe patterns in data acquisition. The roles of the main camera and the auxiliary camera are interchangeable. Despite being demonstrated only with high-speed cameras, TIA–BLIP is a universal imaging paradigm easily adaptable to other multiview PSFPP systems. Second, TIA reduces the workload of each camera employed in the multiview system. The freed capacity can be used to enhance the technical specifications of PSFPP. In particular, at a given frame rate, more pixels on the sensors of the deployed cameras can be used, which increases the imaging FOV. Alternatively, if the FOV is fixed, TIA allows these cameras to operate at higher frame rates, thus increasing the 3D imaging speed. Both advantages point toward implementing TIA–BLIP with an array of cameras to simultaneously accomplish high-accuracy and high-speed 3D imaging over a larger FOV. Third, the two cameras deployed in the current TIA–BLIP system are placed side by side. Compared with existing dual-view PSFPP systems, which mostly place the cameras on different sides of the projector, the arrangement in TIA–BLIP circumvents the intensity difference induced by directional scattering of light from the 3D object and reduces the shadow effect caused by occlusion. Both merits support robust pixel matching in the image reconstruction algorithm to recover 3D information on non-Lambertian surfaces.

Future work will be carried out in the following aspects. First, we plan to further improve TIA–BLIP’s imaging speed and FOV in three ways: by separating the workload to an array of cameras, by implementing a faster DMD, and by using a more powerful laser. Moreover, we will implement depth-range estimation and online feedback to reduce the time in candidate discovery. Furthermore, parallel computing will be used to increase the speed of image reconstruction toward real-time operation [34]. Finally, to robustly image 3D objects with different sizes and with incoherent light sources, we will generate fringe patterns with adaptive periods by using a slit or a pinhole array as the spatial filter [35]. Automated size calculation [36] also will be integrated into the imaging processing software to facilitate the determination of the proper fringe period.

Besides technical improvements, we will continue to explore new applications of TIA–BLIP. For example, it could be integrated into structured illumination microscopy [37] and frequency-resolved multidimensional imaging [38]. TIA–BLIP could also be applied to the dynamic characterization of glass interacting with external forces in nonrepeatable safety-test analysis [39–41]. As another example, TIA–BLIP could trace and recognize hand gestures in 3D space to provide information for human–computer interaction [42]. Furthermore, in robotics, TIA–BLIP could provide dual-view 3D vision for object tracking and reaction guidance [43]. Finally, TIA–BLIP can function as an imaging accelerometer for vibration monitoring in rotating machinery [44] and for behavior quantification in biological science [45].

## Funding

Natural Sciences and Engineering Research Council of Canada (ALLRP-549833-20, ALLRP-551076-20, CRDPJ-532304-18, RGPAS-507845-2017, RGPIN-2017-05959); Canada Foundation for Innovation (37146); Fonds de recherche du Québec–Nature et technologies (2019-NC-252960); Fonds de Recherche du Québec–Santé (267406, 280229).

## Acknowledgment

The authors thank Xianglei Liu for experimental assistance.

## Disclosures

The authors declare no conflicts of interest.

## REFERENCES

**1. **X. Su and Q. Zhang, “Dynamic 3-D shape measurement method: a review,” Opt. Lasers Eng. **48**, 191–204 (2010).

**2. **P. Kilcullen, C. Jiang, T. Ozaki, and J. Liang, “Camera-free three-dimensional dual photography,” Opt. Express **28**, 29377–29389 (2020).

**3. **S. S. Gorthi and P. Rastogi, “Fringe projection techniques: whither we are?” Opt. Lasers Eng. **48**, 133–140 (2010).

**4. **S. Van der Jeught and J. J. Dirckx, “Real-time structured light profilometry: a review,” Opt. Lasers Eng. **87**, 18–31 (2016).

**5. **S. Zhang, “Absolute phase retrieval methods for digital fringe projection profilometry: a review,” Opt. Lasers Eng. **107**, 28–37 (2018).

**6. **J. Liang, “Punching holes in light: recent progress in single-shot coded-aperture optical imaging,” Rep. Prog. Phys. (in press, 2020).

**7. **I. Ishii, K. Yamamoto, K. Doi, and T. Tsuji, “High-speed 3D image acquisition using coded structured light projection,” in *IEEE/RSJ International Conference on Intelligent Robots and Systems* (IEEE, 2007), pp. 925–930.

**8. **J. Geng, “Structured-light 3D surface imaging: a tutorial,” Adv. Opt. Photon. **3**, 128–160 (2011).

**9. **J. Liang, M. F. Becker, R. N. Kohn, and D. J. Heinzen, “Homogeneous one-dimensional optical lattice generation using a digital micromirror device-based high-precision beam shaper,” J. Micro/Nanolith. MEMS MOEMS **11**, 023002 (2012).

**10. **L. J. Hornbeck, “Digital light processing for high-brightness high-resolution applications,” Proc. SPIE **3013**, 27–41 (1997).

**11. **J. Liang, S.-Y. Wu, R. N. Kohn, M. F. Becker, and D. J. Heinzen, “Grayscale laser image formation using a programmable binary mask,” Opt. Eng. **51**, 108201 (2012).

**12. **S. Lei and S. Zhang, “Flexible 3-D shape measurement using projector defocusing,” Opt. Lett. **34**, 3080–3082 (2009).

**13. **B. Li, Y. Wang, J. Dai, W. Lohry, and S. Zhang, “Some recent advances on superfast 3D shape measurement with digital binary defocusing techniques,” Opt. Lasers Eng. **54**, 236–246 (2014).

**14. **C. Jiang, P. Kilcullen, X. Liu, J. Gribben, A. Boate, T. Ozaki, and J. Liang, “Real-time high-speed three-dimensional surface imaging using band-limited illumination profilometry with a CoaXPress interface,” Opt. Lett. **45**, 964–967 (2020).

**15. **Y. Wang and S. Zhang, “Superfast multifrequency phase-shifting technique with optimal pulse width modulation,” Opt. Express **19**, 5149–5155 (2011).

**16. **C. Zuo, Q. Chen, G. Gu, S. Feng, and F. Feng, “High-speed three-dimensional profilometry for multiple objects with complex shapes,” Opt. Express **20**, 19493–19510 (2012).

**17. **D. Li, H. Zhao, and H. Jiang, “Fast phase-based stereo matching method for 3D shape measurement,” in *International Symposium on Optomechatronic Technologies* (IEEE, 2010), pp. 1–5.

**18. **C. Bräuer-Burchardt, C. Munkelt, M. Heinze, P. Kühmstedt, and G. Notni, “Using geometric constraints to solve the point correspondence problem in fringe projection based 3D measuring systems,” in *International Conference on Image Analysis and Processing* (Springer, 2011), pp. 265–274.

**19. **Z. Li, K. Zhong, Y. F. Li, X. Zhou, and Y. Shi, “Multiview phase shifting: a full-resolution and high-speed 3D measurement framework for arbitrary shape dynamic objects,” Opt. Lett. **38**, 1389–1391 (2013).

**20. **W. Yin, S. Feng, T. Tao, L. Huang, M. Trusiak, Q. Chen, and C. Zuo, “High-speed 3D shape measurement using the optimized composite fringe patterns and stereo-assisted structured light system,” Opt. Express **27**, 2411–2431 (2019).

**21. **B. Li, P. Ou, and S. Zhang, “High-speed 3D shape measurement with fiber interference,” Proc. SPIE **9203**, 920310 (2014).

**22. **N. L. Karpinsky, M. Hoke, V. Chen, and S. Zhang, “High-resolution, real-time three-dimensional shape measurement on graphics processing unit,” Opt. Eng. **53**, 024105 (2014).

**23. **C. Jiang, T. Bell, and S. Zhang, “High dynamic range real-time 3D shape measurement,” Opt. Express **24**, 7337–7346 (2016).

**24. **J.-S. Hyun and S. Zhang, “Superfast 3D absolute shape measurement using five binary patterns,” Opt. Lasers Eng. **90**, 217–224 (2017).

**25. **M. Unser, A. Aldroubi, and M. Eden, “Fast B-spline transforms for continuous image representation and interpolation,” IEEE Trans. Pattern Anal. Mach. Intell. **13**, 277–285 (1991).

**26. **X. Liu, J. Liu, C. Jiang, F. Vetrone, and J. Liang, “Single-shot compressed optical-streaking ultra-high-speed photography,” Opt. Lett. **44**, 1387–1390 (2019).

**27. **C. Lei, Y. Wu, A. C. Sankaranarayanan, S.-M. Chang, B. Guo, N. Sasaki, H. Kobayashi, C.-W. Sun, Y. Ozeki, and K. Goda, “GHz optical time-stretch microscopy by compressive sensing,” IEEE Photon. J. **9**, 7500207 (2017).

**28. **J. Liang, R. N. Kohn Jr., M. F. Becker, and D. J. Heinzen, “1.5% root-mean-square flat-intensity laser beam formed using a binary-amplitude spatial light modulator,” Appl. Opt. **48**, 1955–1962 (2009).

**29. **S. Zhang and P. S. Huang, “Novel method for structured light system calibration,” Opt. Eng. **45**, 083601 (2006).

**30. **J.-Y. Bouguet, “Camera calibration toolbox for MATLAB,” http://www.vision.caltech.edu/bouguetj/calib_doc/.

**31. **S. Zhang, D. Royer, and S.-T. Yau, “GPU-assisted high-resolution, real-time 3-D shape measurement,” Opt. Express **14**, 9120–9129 (2006).

**32. **R. Hartley and A. Zisserman, *Multiple View Geometry in Computer Vision* (Cambridge University, 2003).

**33. **Optronis GmbH, *User Manual CP70-1HS-M/C-1900* (2018), Vol. 4.

**34. **W. Gao and Q. Kemao, “Parallel computing in experimental mechanics and optical measurement: a review,” Opt. Lasers Eng. **50**, 608–617 (2012).

**35. **C. Chang, J. Liang, D. Hei, M. F. Becker, K. Tang, Y. Feng, V. Yakimenko, C. Pellegrini, and J. Wu, “High-brightness X-ray free-electron laser with an optical undulator by pulse shaping,” Opt. Express **21**, 32013–32018 (2013).

**36. **C. Hoppe, M. Klopschitz, M. Rumpler, A. Wendel, S. Kluckner, H. Bischof, and G. Reitmayr, “Online feedback for structure-from-motion image acquisition,” in *British Machine Vision Conference (BMVC)* (2012), pp. 1–12.

**37. **J. Qian, M. Lei, D. Dan, B. Yao, X. Zhou, Y. Yang, S. Yan, J. Min, and X. Yu, “Full-color structured illumination optical sectioning microscopy,” Sci. Rep. **5**, 14513 (2015).

**38. **K. Dorozynska, V. Kornienko, M. Aldén, and E. Kristensson, “A versatile, low-cost, snapshot multidimensional imaging approach based on structured light,” Opt. Express **28**, 9572–9586 (2020).

**39. **A. Ramos, F. Pelayo, M. Lamela, A. F. Canteli, C. Huerta, and A. Acios, “Evaluation of damping properties of structural glass panes under impact loading,” in *COST Action TU0905 Mid-Term Conference on Structural Glass* (2013).

**40. **C. Bedon, M. Fasan, and C. Amadio, “Vibration analysis and dynamic characterization of structural glass elements with different restraints based on operational modal analysis,” Buildings **9**, 13 (2019).

**41. **M. Haldimann, A. Luible, and M. Overend, *Structural Use of Glass* (IABSE, 2008), Vol. 10.

**42. **Y. Li, “Hand gesture recognition using Kinect,” in *IEEE International Conference on Computer Science and Automation Engineering* (IEEE, 2012), pp. 196–199.

**43. **S. Huang, K. Shinya, N. Bergström, Y. Yamakawa, T. Yamazaki, and M. Ishikawa, “Dynamic compensation robot with a new high-speed vision system for flexible manufacturing,” Int. J. Adv. Manuf. Technol. **95**, 4523–4533 (2018).

**44. **R. B. Randall, “State of the art in monitoring rotating machinery-part 1,” Sound Vibr. **38**, 14–21 (2004).

**45. **T. A. Van Walsum, A. Perna, C. M. Bishop, C. P. Murn, P. M. Collins, R. P. Wilson, and L. G. Halsey, “Exploring the relationship between flapping behaviour and accelerometer signal during ascending flight, and a new approach to calibration,” Int. J. Avian Sci. **162**, 13–26 (2020).