Abstract

Intravascular photoacoustic (IVPA) imaging is a catheter-based imaging modality for the assessment of atherosclerotic plaques. Successful application of IVPA to in vivo coronary arterial imaging requires overcoming motion artifacts associated with the cardiac cycle. We propose a method for correcting cardiac motion artifacts observed in sequential IVPA images acquired during continuous pullback of the imaging catheter. The method groups raw photoacoustic signals into subsets corresponding to similar phases of the cardiac cycle. The sequential images representing the initial pressure distribution on the vascular cross-sections are then reconstructed from the clustered frames of signals by time reversal. Results on simulation data demonstrate the efficacy of this method in suppressing motion artifacts. Qualitative and quantitative evaluations of the method indicate enhanced image quality. Comparison results reveal that the method corrects motion more computationally efficiently than image-based gating.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Photoacoustic tomography (PAT) is a rapidly developing imaging modality that allows structural and functional imaging of biological tissues with high penetration depth and high optical contrast [1]. Intravascular photoacoustic (IVPA) imaging is a typical catheter-based application of PAT. As a complementary tool to intravascular ultrasound (IVUS) for the assessment of atherosclerosis, it is capable of providing multi-scale anatomical, functional, and molecular information of vessels by combining noninvasive PAT with endoscopic detection. It enables potential applications in the diagnosis and interventional treatment of cardiovascular diseases [2,3].

Successful application of an IVPA system to in vivo intracoronary imaging is challenging because motion artifacts owing to voluntary and involuntary motion are inevitable. Voluntary motion such as body movement can be avoided by decreasing the acquisition time and keeping the imaged object still during acquisition [4]. In whole-body PAT of small animals, animal movement is significantly reduced through mechanical clamping or fixation [5–7]. This scheme is only applicable to specific scenarios because clamping is not suitable for all body parts. Involuntary motion is associated with the heartbeat, breathing, pulsating blood flow within the vascular lumen, and arterial vasomotion. It leads to undesired relative motion between the ultrasonic transducer and the vessel wall, comprising transversal and longitudinal components. The transversal motion shifts (translates and rotates) the vessel structures from one slice to the next, while the longitudinal motion induces a proximal or distal displacement in addition to the pullback. As a consequence, the vessel cross-sections are not equally spaced along the pullback trajectory and the geometrical description of the vessel wall is distorted. The motion artifacts are visible as saw-tooth shaped vessel wall boundaries in the volumetric images reconstructed from pullback recordings. These artifacts degrade image quality and thus hinder the identification and interpretation of structures such as vessels and atherosclerotic plaques. Furthermore, they reduce the precision of quantitative assessment of tissue properties and of 3-D vessel rendering.

As an imaging modality with a high frame rate, PAT enables rendering of 2-D or 3-D images via excitation of an entire volume with a single nanosecond laser pulse, which avoids motion artifacts in a single frame [8]. However, for studies of vessel morphology, plaque characterization, and other purposes requiring 3-D imagery, acquisition and analysis of multiple frames rendering multiple vascular cross-sections are required. Accordingly, motion correction is relevant in IVPA imaging of coronary arterial vessels.

Many efforts have been made to mitigate motion artifacts in PAT applications involving multi-frame data analysis. A simple and generally effective way to account for motion caused by heartbeat or breathing is to gate the sequences according to an electrocardiogram (ECG) or respiratory triggering signal, either prospectively or retrospectively. On-line prospective gating activates data acquisition during the same phase of each cycle by employing a triggering or synchronization scheme. Off-line retrospective gating captures images or raw signals continuously over the cycles while recording the ECG or respiratory waveform simultaneously. After acquisition, images or signals collected at the same temporal location (cardiac/respiratory phase) are selected according to the ECG or respiratory signals. This technique is essential in many established imaging systems that need to locate objects accurately [9–12]. For instance, image-based gating has been routinely applied to suppress motion artifacts in continuous-pullback IVUS and intravascular optical coherence tomography (IV-OCT) image sequences [13–17]. One frame per cycle is extracted to form a subsequence by tracking the cyclic change of the image intensity or the vascular lumen contour across the entire pullback. In photoacoustic imaging (PAI) experiments, retrospective gating has been adopted to suppress respiratory motion artifacts in a whole-body PAT system for small animals [18,19]. It is implemented in two ways, in hardware or in software. Hardware respiratory gating collects raw PA signals while the imaging object breathes freely, simultaneously monitoring the respiratory waveform with external equipment. The collected PA signals are aligned according to the respiratory phases. Finally, the images in a complete respiratory cycle are reconstructed based on the PA measurements in the same respiratory phases [18].
This method requires breathing training for the imaging object because quick or uneven breathing reduces triggering accuracy. Software respiratory gating extracts a signal that encodes the respiratory phases from the PA measurements themselves; the motion is then corrected based on this signal prior to image reconstruction [19]. When the pause between two breaths exceeds the length of a single breath, a criterion must be established to distinguish dynamic from static frames according to prior knowledge of respiratory characteristics.

Frame motion compensation (FMC) and inter-frame motion compensation (IFMC) have been used to reduce motion artifacts in frame-averaged PA images [20]. Both methods determine motion vectors between frames by block matching with a three-step search. FMC corrects motion errors by comparing past images with the most recent image as the reference. However, large motions may go undetected because the motion vector accumulates over time and easily exceeds the detectable range. IFMC overcomes this limitation of FMC by comparing two consecutive frames. The accuracy of both methods depends strongly on the search step size, which is difficult to select properly in an automatic manner.

In spectroscopic PAT of the heart, images are acquired under multi-wavelength excitation at separate time points. Cardiac motion during this process leads to blurring in the images. Motion clustering has been demonstrated to be effective in reducing such motion blurring [21]. A clustering algorithm such as k-means is used to separate a sequence of single-pulse images acquired at multiple excitation wavelengths into clusters corresponding to different stages of the cardiac cycle. The number of clusters should be selected properly based on the severity of blurring, the signal-to-noise ratio (SNR), and the performance of the clustering algorithm.

In addition, model-based schemes have been demonstrated to reconstruct PAT images with high quality, complementing analytical inversion formulations such as back-projection (BP) [22]. A model physically describing the forward acoustic problem is established, which outputs the theoretical acoustic pressure induced by optical absorbers in tissues. The initial pressure or optical energy deposition is recovered by iteratively minimizing the error between the measured acoustic pressure and the theoretical pressure calculated by the forward model. The scheme is capable of including all linear effects in the forward model. By incorporating the motion into the forward model, estimation of the motion parameters and reconstruction of the desired images can be performed simultaneously [23].

In this paper, we propose a novel method for suppressing motion artifacts associated with the cardiac cycle in IVPA pullback volumes. To the best of our knowledge, this is the first paper focusing on motion artifact correction for volumetric IVPA images. The raw PA signals collected by the ultrasonic detector in successive vascular cross-sections are grouped by clustering into subsets that correspond to similar phases of the cardiac cycle. The sequential images representing the initial pressure distribution in vascular cross-sections are reconstructed from the selected frames of signals by time reversal (TR). We tested our method on simulation data and analyzed the influence of the objective function threshold in the clustering algorithm on image reconstruction. In addition, we conducted experiments comparing our method with image-based gating to demonstrate its superiority in suppressing IVPA motion artifacts.

The remainder of this paper is organized as follows. Sect. 2 describes the proposed method in detail. Sect. 3 provides the demonstration and comparison results. Sect. 4 gives related discussions, and Sect. 5 concludes the paper with a summary.

2. Method

2.1 Principle of IVPA imaging

As illustrated in Fig. 1, the procedure of IVPA imaging is analogous to IVUS, where a catheter is inserted into the vascular lumen and advanced to the distal end under the guidance of X-ray angiography. During pullback of the catheter, the probe mounted on its tip emits short (∼ns) laser pulses that are absorbed by the surrounding tissues, leading to a temperature rise. Subsequently, the thermo-elastic effect induces a pressure rise that is proportional to the optical energy deposition. The pressure rise propagates as wide-band (∼MHz) ultrasonic waves, that is, PA waves, to the tissue surface. The ultrasonic transducers on the probe scan the surrounding tissues circumferentially, collecting the photoacoustically generated pressure along a circular trajectory in the imaging plane, which is perpendicular to the catheter. The ultrasonic detector is idealized as a point-like detector by ignoring its aperture effect. A full-view (360°) scan provides the measurements in M angles, {θ1,θ2,…,θM}, where N points are sampled on each scanning radius. Finally, images representing the spatially varying optical energy deposition or initial pressure in vascular cross-sections are reconstructed from the measured PA signals by acoustic inversion [24,25]. Moreover, the solution to the optical inversion enables quantitative imaging by recovering optical properties (absorption coefficient μa and scattering coefficient μs), the thermoelastic coefficient (Grüneisen coefficient), and functional properties including blood oxygenation (sO2) and the concentration of chromophores from the optical energy deposition or PA measurements [26,27].


Fig. 1. Schematic diagram of IVPA imaging. (a) Longitudinal view (L-view) of a vessel segment to be imaged; (b) Transversal view of an imaging plane; (c) Stacked transversal images in a temporal order; (d) Generation of cardiac cycle-dependent motion artifacts in a time-axis view of pullback volumes.


During acquisition of PA signals in successive cross-sections, the catheter remains in the center of the transversal imaging plane, as sketched in Fig. 1(d). The cardiac cycle-dependent motion causes misalignment of successive slices along the sequence of cross-sectional images, that is, of the B-scan images in Cartesian coordinates, as well as saw-tooth shaped vessel wall boundaries in time-axis views of IVPA pullback volumes. Moreover, the variations of the luminal shape are more prominent in the vertical direction than in the horizontal direction.

2.2 Motion suppression by signal clustering

Suppose that a single pullback produces W slices to be assessed. Each slice is discretized into M×N sampling locations. The PA signals collected in the kth slice are recorded in an M×N matrix,

$${{\boldsymbol P}_k} = \left[ {\begin{array}{cccc} {{{\boldsymbol p}_{11}}}&{{{\boldsymbol p}_{12}}}& \cdots &{{{\boldsymbol p}_{1N}}}\\ {{{\boldsymbol p}_{21}}}&{{{\boldsymbol p}_{22}}}& \cdots &{{{\boldsymbol p}_{2N}}}\\ \vdots & \vdots & \ddots & \vdots \\ {{{\boldsymbol p}_{M1}}}&{{{\boldsymbol p}_{M2}}}& \cdots &{{{\boldsymbol p}_{MN}}} \end{array}} \right], $$
where k = 1,2,…,W, pij denotes a pressure vector collected at the jth location in the ith measuring angle, i = 1,2,…,M, and j = 1,2,…,N. A matrix P constitutes a signal frame and a single pullback produces W frames in total. The correlation matrix of these signal frames is obtained by
$${\boldsymbol C} = \left[ {\begin{array}{cccc} {{\rho_{11}}}&{{\rho_{12}}}& \cdots &{{\rho_{1W}}}\\ {{\rho_{21}}}&{{\rho_{22}}}& \cdots &{{\rho_{2W}}}\\ \vdots & \vdots & \ddots & \vdots \\ {{\rho_{W1}}}&{{\rho_{W2}}}& \cdots &{{\rho_{WW}}} \end{array}} \right], $$
where
$${\rho _{ij}} = \frac{{\textrm{|Cov}({{\boldsymbol P}_i},{{\boldsymbol P}_j})|}}{{\sqrt {D({{\boldsymbol P}_i})} \cdot \sqrt {D({{\boldsymbol P}_j})} }}. $$

Here, i and j range from 1 to W, Pi and Pj denote, respectively, the ith and jth frame, ρij is the correlation coefficient between Pi and Pj, D(Pi) and D(Pj) are the variances of Pi and Pj, and Cov(Pi, Pj) is the covariance of Pi and Pj. The correlation coefficients in C are rearranged into a 1-D array row by row, that is, {ρ11, ρ12,…, ρ1W, ρ21, ρ22, …, ρ2W, …, ρW1, ρW2, …, ρWW}. For simplicity, it is denoted as a data set, F = {f1, f2, f3, …, fQ}, where Q = W×W.
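The construction of C and F above can be sketched in Python (the authors used MATLAB; this is a hedged re-implementation, and the toy frame dimensions are illustrative). Each frame is flattened so that the time-resolved pressure vectors pij are treated generically as samples:

```python
import numpy as np

def correlation_matrix(frames):
    """Eqs. (1)-(3), sketch: frames has shape (W, M, N); returns the
    W x W matrix C with rho_ij = |Cov(P_i, P_j)| / (std(P_i) * std(P_j))."""
    W = frames.shape[0]
    flat = frames.reshape(W, -1).astype(float)   # flatten each frame P_k
    C = np.empty((W, W))
    for i in range(W):
        for j in range(W):
            cov = np.mean((flat[i] - flat[i].mean()) * (flat[j] - flat[j].mean()))
            C[i, j] = abs(cov) / (flat[i].std() * flat[j].std())  # Eq. (3)
    return C

frames = np.random.rand(4, 8, 16)  # toy pullback: W = 4 slices, M = 8, N = 16
C = correlation_matrix(frames)
F = C.flatten()                    # row-by-row rearrangement: Q = W * W entries
```

If each pij is additionally resolved in time, the same code applies to an array of shape (W, M, N, T), since each frame is flattened before the covariance is computed.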

The entries in F are grouped into subsets corresponding to the similar phases in a cardiac cycle by affinity propagation (AP) clustering [28,29]. The detailed steps are as follows.

Step 1. Initialization.

The iteration counter is initialized as t = 0. The responsibility matrix Rt = [rt(i, k)]Q×Q and availability matrix At = [at(i, k)]Q×Q are initialized as zero matrices, where r0(i, k) = a0(i, k) = 0. rt(i, k) denotes the responsibility sent from fi to candidate cluster center fk at the tth iteration, reflecting the accumulated evidence for how well-suited fk is to serve as the cluster center for fi. at(i, k) denotes the availability sent from candidate cluster center fk to fi at the tth iteration, reflecting the accumulated evidence for how appropriate it would be for fi to choose fk as its cluster center [28].

Step 2. Calculation of the similarity matrix.

The similarity st(i, k) is determined by

$${s_t}(i,k) ={-} {|{{f_i} - {f_k}} |^2}, $$
which indicates how well fk is suited to be the cluster center for fi. The similarity matrix St at the tth iteration is a collection of real-valued similarities among all data points in F.

Step 3. Updating of responsibility matrix and availability matrix.

The responsibility matrix and availability matrix are updated as

$${r_{t + 1}}(i,k) = \left\{ {\begin{array}{ll} {{s_t}(i,k) - \mathop {\max }\limits_{_{k^{\prime}\textrm{s}\textrm{.t}\textrm{.}k^{\prime} \ne k}} \{{{a_t}(i,k^{\prime}) + {r_t}(i,k^{\prime})} \}},&i \ne k\\ {{s_t}(i,k) - \mathop {\max }\limits_{_{k^{\prime}\textrm{s}\textrm{.t}\textrm{.}k^{\prime} \ne k}} \{{{s_t}(i,k^{\prime})} \}},&i = k \end{array}} \right.$$
and
$${a_{t + 1}}(i,k) = \left\{ {\begin{array}{ll} {\min \left\{ {0,{r_{t + 1}}(k,k) + \sum\limits_{i^{\prime}\textrm{s}\textrm{.t}\textrm{.}i^{\prime} \notin \{ i,k\} } {\max ({0,{r_{t + 1}}(i^{\prime},k)} )} } \right\}},&i \ne k\\ {\sum\limits_{i^{\prime}\textrm{s}\textrm{.t}\textrm{.}i^{\prime} \ne k} {\max } \{{0,{r_{t + 1}}(i^{\prime},k)} \}},&i = k \end{array}} \right., $$
where i’, k’, i, and k range from 1 to Q, and rt+1(i, k) and at+1(i, k) are, respectively, the responsibility and availability between fi and fk at the (t+1)th iteration.

Step 4. Attenuation of the responsibilities and availabilities.

To avoid numerical instability, the responsibilities and availabilities are attenuated by a damping factor,

$$\left\{ {\begin{array}{l} {{{\hat{r}}_{t + 1}}(i,k) = \lambda {r_t}(i,k) + (1 - \lambda ){r_{t + 1}}(i,k)}\\ {{{\hat{a}}_{t + 1}}(i,k) = \lambda {a_t}(i,k) + (1 - \lambda ){a_{t + 1}}(i,k)} \end{array}} \right., $$
where λ denotes the damping factor, which lies in the range [0.5, 1]. In this study, we set it to 0.5.

Step 5. Determination of clustering results.

The sum of the updated responsibility and availability is calculated,

$$e = {\hat{a}_{t + 1}}(i,k) + {\hat{r}_{t + 1}}(i,k). $$

For each fi, the index k that maximizes e determines the clustering result: if i = k, fi is itself a cluster center; otherwise, fk is the cluster center of fi.

Step 6. Determination of termination of the iteration.

Whether the iteration is terminated or not is determined by the following objective function,

$$J = \sum\limits_{i = 1}^H {\sum\limits_{f \in {{\boldsymbol h}_i}} {|f - {g_i}{|^2}} }, $$
where f denotes a sample in the cluster, hi is the cluster centered on gi, and H is the number of clusters. If J falls below the threshold, the iteration is terminated and the clustering results are output; otherwise, let t←t+1 and return to Step 2.
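Steps 1 through 6 can be sketched as follows (a hedged Python re-implementation; the responsibility and availability updates follow the standard affinity-propagation formulation of Frey and Dueck [28], and the median preference on the diagonal of the similarity matrix is an assumed choice not specified in the text):

```python
import numpy as np

def affinity_propagation(F, threshold=0.3, lam=0.5, max_iter=200):
    """Cluster the 1-D data set F; stop once the objective J (sum of
    squared distances of samples to their cluster centers, Eq. (9))
    falls below `threshold`, or after max_iter iterations."""
    F = np.asarray(F, dtype=float)
    Q = F.size
    S = -(F[:, None] - F[None, :]) ** 2      # s(i,k) = -|f_i - f_k|^2, Eq. (4)
    S[np.diag_indices(Q)] = np.median(S)     # preference (assumed: median)
    R = np.zeros((Q, Q))
    A = np.zeros((Q, Q))
    centers = np.arange(Q)
    for _ in range(max_iter):
        # responsibilities: r(i,k) = s(i,k) - max_{k'!=k}[a(i,k') + s(i,k')]
        AS = A + S
        idx = np.argmax(AS, axis=1)
        first = AS[np.arange(Q), idx]
        AS[np.arange(Q), idx] = -np.inf
        second = AS.max(axis=1)
        Rnew = S - first[:, None]
        Rnew[np.arange(Q), idx] = S[np.arange(Q), idx] - second
        R = lam * R + (1 - lam) * Rnew       # damping, Eq. (7)
        # availabilities: a(i,k) = min(0, r(k,k) + sum_{i' not in {i,k}} max(0, r(i',k)))
        Rp = np.maximum(R, 0)
        Rp[np.diag_indices(Q)] = R[np.diag_indices(Q)]
        col = Rp.sum(axis=0)
        Anew = np.minimum(0.0, col[None, :] - Rp)
        Anew[np.diag_indices(Q)] = col - R[np.diag_indices(Q)]
        A = lam * A + (1 - lam) * Anew       # damping, Eq. (7)
        # exemplar of each point: the k maximizing e = a + r (Eq. (8))
        centers = np.argmax(A + R, axis=1)
        if np.sum((F - F[centers]) ** 2) < threshold:  # objective J, Eq. (9)
            break
    return centers
```

The returned `centers` array assigns each entry of F to an exemplar; entries sharing an exemplar form one cluster, i.e., one candidate cardiac phase.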

After clustering, the frames of signals collected in the same cardiac phases are selected. Finally, the images representing the initial pressure distribution in the vascular cross-sections are reconstructed from these frames of signals with the TR reconstruction approach [30]. The complete process of the proposed method is illustrated in Fig. 2.


Fig. 2. Flowchart of complete process of our method.


2.3 Performance evaluation

Owing to the limitations of our experimental conditions and set-up, continuous pullback volumetric data acquired on in vivo dynamic vessels were not available; we therefore validated our method on simulation data. We implemented the method in MATLAB (R2018a, The MathWorks, Inc., Natick, Massachusetts) on a laptop with a 2.5 GHz Intel Core i5-10300H CPU, 8 GB RAM, and 64-bit Windows 10.

2.3.1 Simulated image preparation

We constructed computer-generated phantoms that mimic coronary arterial vessels containing different tissue types. Figure 3 shows representative examples of vascular cross-sections. Coronary arterial vessels follow the cyclic dynamics of the heart, resulting in periodic variations of the cross-sectional area of the vascular lumen. For each phantom, we generated successive cross-sections along the long axis of the lumen corresponding to different time points in the cycles, based on the periodic change of the luminal area. We assumed that the first frame in a pullback sequence is acquired at end-diastole, when the luminal area reaches its minimum. Thus, we determined the luminal area at a time-point n in the sequence by [31]

$$S(n) = \left\{ {\begin{array}{lr} {A\sin (\mathrm{\pi }Rn)\exp ( - \mathrm{\pi }\alpha n) + {S_0},}&{\textrm{0 < }n < 1/R} \\ {S(n - 1/R)},&{n \ge 1/R} \end{array}} \right., $$
where A is a constant controlling the luminal area, R is the heart rate in Hz (beats per second), α is a constant determining the time when the luminal area reaches its maximum, and S0 is the minimal luminal area at end-diastole. We obtained the vessel wall contour at time-point n+1 by expanding outward or contracting inward the contour at time-point n which is represented with discrete points ${{\boldsymbol V}_{i,n}} = ({l_{i,n}},{\theta _{i,n}})$. The polar coordinate of a point in the contour at time-point n+1, ${{\boldsymbol V}_{i,n + 1}} = ({l_{i,n + 1}},{\theta _{i,n + 1}})$, is determined by
$$\left\{ {\begin{array}{l} {{l_{i,n + 1}} = {l_{i,n}}\lambda \sqrt {S(n + 1)/S(n)} }\\ {{\theta_{i,n + 1}} = {\theta_{i,n}}} \end{array}} \right., $$
where l denotes the polar radius, θ denotes the polar angle, and λ is a proportional factor controlling the extent to which the contour expands or contracts. The value of λ depends on the tissue type. We set λ = 1 for the lumen, λ < 1 for the calcified plaques owing to their poorer elasticity than the lumen, and λ > 1 for the fibrosis-lipid plaques owing to their better elasticity than the lumen. Accordingly, the cross-sectional model at each time-point in the cycle is automatically generated from the one at end-diastole.
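The area model of Eq. (10) and the contour update of Eq. (11) can be sketched as follows (a hedged Python illustration; the parameter values A, R, α, and S0 below are arbitrary and not taken from the paper):

```python
import numpy as np

# Illustrative parameters: A controls the amplitude of the area variation,
# R is the heart rate in Hz, alpha sets when the area peaks, and S0 is
# the minimal (end-diastolic) luminal area.
A, R, alpha, S0 = 2.0, 1.2, 0.5, 5.0

def S(n):
    """Luminal area at time-point n (Eq. (10)); periodic with period 1/R."""
    n = n % (1.0 / R)  # implements S(n) = S(n - 1/R) for n >= 1/R
    return A * np.sin(np.pi * R * n) * np.exp(-np.pi * alpha * n) + S0

def advance_contour(l, theta, n, dt, lam=1.0):
    """Eq. (11): scale the polar radii of a wall contour so the enclosed
    area follows S(n); lam is the tissue-dependent elasticity factor
    (1 for the lumen, <1 for calcified plaques, >1 for fibro-lipid plaques)."""
    return l * lam * np.sqrt(S(n + dt) / S(n)), theta
```

With λ = 1 the polar angles are unchanged and the radii scale as the square root of the area ratio, so the enclosed area of the lumen contour tracks S(n) exactly; plaque contours deviate according to their elasticity factor.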

We simulated the sequential frames of spatially varying PA signals in vascular cross-sections by inputting the generated cross-sectional geometrical models into our previously developed endoscopic PAT simulation platform [31]. Table 1 provides the optical and acoustic property parameters of the vessel phantoms defined by referring to the histological findings [32,33]. The speed of sound and density of each tissue type follow Gaussian distributions based on the values shown in the table.


Table 1. Parameters of optical and acoustic properties of vessel phantoms for forward IVPA simulation

Considering that motion artifacts associated with the cardiac cycle are not visually prominent in successive transversal images, we utilized time-axis views, that is, L-views, of the pullback volumes to facilitate analyzing motion artifacts. We obtained vertical and horizontal L-views as illustrated in Fig. 4.

2.3.2 Figures of Merit

We utilized the dissimilarity matrix (DM), average dissimilarity (AD), and average inter-frame dissimilarity (AIFD) as quantitative metrics to evaluate the quality of reconstructed image sequences. For a W-frame sequence {I1,I2,…,IW}, a W×W-dimensional DM, that is, DW×W= [di,j], is constructed by pairwise comparison of the images [17],

$${d_{i,j}} = 1 - \frac{{\sum\limits_{k = 1}^{Width} {\sum\limits_{l = 1}^{Height} {|{{I_i}({k,l} )- {\mu_i}} |\cdot |{{I_j}({k,l} )- {\mu_j}} |} } }}{{\sqrt {\sum\limits_{k = 1}^{Width} {\sum\limits_{l = 1}^{Height} {{{[{{I_i}({k,l} )- {\mu_i}} ]}^2}} \sum\limits_{k = 1}^{Width} {\sum\limits_{l = 1}^{Height} {{{[{{I_j}({k,l} )- {\mu_j}} ]}^2}} } } } }}, $$
where i and j range from 1 to W, Ii and Ij are, respectively, the ith and jth image frame of dimensions Width×Height, di,j is the dissimilarity between Ii and Ij, Ii(k,l) and Ij(k,l) are the gray-levels of pixel (k, l) in Ii and Ij, respectively, and μi and μj are the average gray-levels of Ii and Ij. di,j lies in the interval [0,1] and di,j = dj,i. A smaller element of the DM indicates a pair of frames that differ less in appearance.

AD and AIFD are, respectively, defined as [17]

$$D(k )= \frac{1}{{W - k}}\sum\limits_{m = 1}^{W - k} {{d_{m,m + k}}}$$
and
$$D = \frac{1}{W}\sum\limits_{i = 1}^W {\sum\limits_{j = 1}^W {{d_{i,j}}} }, $$
where k = 0,1,…,W‒1, D(k) denotes the AD between two frames with the interval of k frames, D(0) = 0, and dm,m+k is the dissimilarity between frames Im and Im+k. Lower AD and AIFD indicate higher similarity between frames.
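The three metrics can be computed as follows (a hedged Python sketch of Eqs. (12)–(14); the AIFD normalization by W, rather than W², follows the formula as printed):

```python
import numpy as np

def dissimilarity_matrix(seq):
    """Eq. (12): d_ij = 1 - normalized absolute cross-correlation of
    mean-subtracted images. seq: array of shape (W, Height, Width)."""
    W = seq.shape[0]
    flat = seq.reshape(W, -1).astype(float)
    flat -= flat.mean(axis=1, keepdims=True)      # subtract mu_i per frame
    num = np.abs(flat) @ np.abs(flat).T           # sum |I_i - mu_i| * |I_j - mu_j|
    norm = np.sqrt((flat ** 2).sum(axis=1))       # sqrt of sum (I - mu)^2
    return 1.0 - num / np.outer(norm, norm)

def average_dissimilarity(D, k):
    """Eq. (13): AD between frames separated by k; D(0) = 0 by definition."""
    W = D.shape[0]
    return float(np.mean([D[m, m + k] for m in range(W - k)])) if k else 0.0

def aifd(D):
    """Eq. (14): average inter-frame dissimilarity over all frame pairs."""
    return D.sum() / D.shape[0]
```

Because the numerator of Eq. (12) uses absolute values, the Cauchy–Schwarz inequality guarantees d_ij ∈ [0, 1], with d_ii = 0, matching the properties stated above.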

3. Results

3.1 Results of image reconstruction

In the demonstration experiments, we generated 300 successive cross-sections corresponding to different time-points in the cardiac cycles for each phantom. We set α = 0.5 and R = 1.2 Hz, so that the length of a cardiac cycle was ${1 / R} \approx 0.83\textrm{ }\textrm{s}$. The frame rate was set as 24 fps by referring to [34], that is, 20 frames per cycle; therefore, the 300 frames covered about 15 cycles. Figure 5 shows the transversal images obtained by forward simulation for the four cross-sections shown in Fig. 3. Figure 6 shows examples of transversal images representing the initial pressure distribution reconstructed from the simulated PA signals by the conventional TR algorithm without motion correction. In the figure, shifts between successive cross-sections, that is, translation and rotation of the structures from one image to the next, can be observed. Figure 7 shows the L-views of four 300-frame sequences reconstructed by the conventional TR (non-gated sequences) and by our method (gated sequences). In our method, the objective function threshold of clustering was set as 0.3. In the figure, motion artifacts associated with the cardiac cycle appear as saw-tooth shaped vessel wall boundaries. Moreover, the artifacts are more prominent in the vertical L-views than in the horizontal ones. Note that the L-views of the gated sequences exhibit significantly enhanced visualization with smoothed vessel wall boundaries as opposed to the non-gated sequences. However, there is an apparent loss of resolution due to gating, which is a known trade-off of the gating process [11].


Fig. 3. Geometry of vascular cross-sections which are numbered as I, II, III, and IV from left to right.



Fig. 4. Schematic diagram of L-views of IVPA pullback volumes.



Fig. 5. Simulated transversal images of four vessel cross-sections shown in Fig. 3. (a) Images representing the normalized optical energy deposition; (b) Images representing the normalized PA signals reaching the detector.



Fig. 6. Images randomly selected from the pullback sequences of transversal images representing the initial pressure distribution, which are reconstructed directly from the simulated PA signals by TR without motion suppression. (a) Phantom I; (b) Phantom II; (c) Phantom III; (d) Phantom IV.



Fig. 7. The gating results of the proposed method for four IVPA sequences. (a) Vertical L-views; (b) Horizontal L-views. To facilitate comparison, the gated sequences are artificially stretched to match the physical lengths of the non-gated ones, because the non-gated sequences contain many more images per millimeter, whereas the gated sequences contain far fewer images owing to gating.


Figure 8, Fig. 9, and Table 2 provide the evaluation metrics obtained from the non-gated and gated image sequences. In Fig. 8, the DMs are displayed as grayscale images exhibiting periodic structures. The overall brightness of the DM visualizations is significantly reduced, and both AD and AIFD decrease after motion correction, indicating reduced dissimilarity between frames in the gated subsets.


Fig. 8. Visualization of DMs obtained from the non-gated (left column) and gated (right column) image sequences. (a) Phantom I; (b) Phantom II; (c) Phantom III; (d) Phantom IV



Fig. 9. AD functions with respect to frame intervals obtained from the non-gated and gated image sequences for (a) phantom I, (b) II, (c) III, and (d) IV. There are 60, 65, 80, and 72 frames in four gated sequences, respectively.



Table 2. AIFDs of the image sequences before and after motion correction.

3.2 Influence of objective function threshold

The objective function threshold in Eq. (9) determines whether the iteration of clustering is terminated. We set it to 0.3, 0.6, 0.9, 1.2, 1.5, and 1.8, respectively, while keeping all other conditions unchanged, to investigate its influence on the performance of the proposed method. Table 3 provides the results for phantom I, revealing that a lower threshold leads to better motion suppression. However, a lower threshold also requires more iterations. Therefore, a trade-off between run-time and reconstruction accuracy should be made.


Table 3. AIFDs of the image sequences for phantom I in the case of different thresholds of the objective function.

3.3 Results of comparison with the image-based gating

We compared the proposed method with an image-based gating approach developed for IVUS [17]. The results shown in Fig. 10 reveal that motion artifacts in the image sequences processed by image-based gating remain prominent compared with those obtained by our method. The same conclusion can be drawn from the evaluation metrics provided in Fig. 11. In addition, we recorded the run-time of both methods, as provided in Table 4. The run-time for the image-based gating includes the time spent reconstructing the entire image sequence by TR in addition to the time spent on gating. The results indicate that our method is more computationally efficient than image-based gating. This is because our method groups raw PA signals prior to image reconstruction, avoiding the reconstruction and post-processing of non-gated frames. In contrast, image-based gating corrects motion artifacts via a post-processing procedure over the complete pullback recording. It requires reconstructing the successive images from the raw signals collected in every slice along the pullback trajectory, which is computationally expensive compared with the off-line gating itself.


Fig. 10. L-views of the IVPA sequence of phantom I processed with our method and the image-based gating method. (a) Non-gated sequence; (b) Gated sequence with our method; (c) Gated sequence with the image-based method.



Fig. 11. DMs and ADs of the reconstructed image sequences for phantom I. There are 60 and 55 frames in the gated sequences obtained by our method and image-based gating, respectively. (a) Visualization of the DMs obtained from non-gated sequence (left), gated sequence by our method (middle), and gated sequence by image-based gating (right); (b) AD with respect to frame interval.



Table 4. Quantitative metrics of the image sequences of phantom I processed by two methods.

4. Discussion

4.1 Superiority of our method to other motion correction methods

As summarized in the Introduction, the existing strategies for motion correction in PAT involving multi-frame analysis include FMC, motion clustering, model-based reconstruction, and gating. FMC and IFMC aim to reduce motion artifacts in frame-averaged PA images and enhance the SNR of deep-tissue imaging with a commonly used Nd:YAG laser. Motion clustering focuses on alleviating the motion blurring observed in spectroscopic PA images acquired under multi-wavelength excitation at different time-points. Both strategies are applicable to scenarios where multiple images are acquired of the same moving target. It is infeasible to apply them to correct motion artifacts associated with the cardiac cycle in IVPA pullback volumes, whose images are acquired at different locations along the catheter pullback path. Model-based motion correction inherently solves the forward problem iteratively, with the forward operator calculated repeatedly. This procedure is computationally burdensome, which hinders its application in real-time, high-resolution, and large-volume imaging such as IVPA.

Gating techniques have been commonly utilized in cardiac imaging to reduce motion artifacts owing to heartbeat or breathing. However, prospective gating requires a special ECG or respiratory triggering device, adding complexity, long setup times, and a prolonged acquisition procedure compared with continuous non-gated acquisition. The accuracy of retrospective gating depends on the gating signals extracted from the raw measurements or images. Image-based gating relies on post-processing of images and therefore requires reconstructing all images in a dynamic sequence prior to gating. A single IVPA pullback sequence acquired on a vessel segment 10 mm long contains about 400 frames at a constant pullback speed (0.5 mm/sec) and a frame rate of 20 fps. This volume of image data reduces the efficiency of the image-based algorithm. Moreover, the inevitable loss of information regarding vascular structures and properties during image reconstruction is an important issue that should be taken into account. Our method is based on a retrospective gating scheme that selects the frames of PA signals acquired in the same cardiac phase. It differs from state-of-the-art image-based gating schemes in that the latter extract the cardiac dynamics from the grayscale images themselves, whereas our method suppresses motion prior to image reconstruction, avoiding reconstruction of the non-gated frames.

4.2 Limitations of our method

Our method enables full suppression of motion artifacts associated with the cardiac cycle at a relatively low computational cost compared with image-based gating. However, the loss of information and resolution is a known trade-off of the gating process. Only static sequences at certain cardiac phases remain after resampling the signals or images, discarding useful information about vascular structural and functional features. Moreover, all frames between systole and diastole generally need to be analyzed for a continuous assessment of tissue elastic properties. Further study of data integrity is desired, so as to ensure that the rendered image sequences retain complete information while the image quality is enhanced.

The motion artifacts in in vivo sequential intravascular images are the combined result of various factors, such as heartbeat motion, pulsating blood flow in the vascular lumen, arterial vasomotion, and catheter-based motion. The first three are related to the cardiac cycle. They cause the trajectory of the catheter tip to deviate from the lumen axis owing to lateral movement of the tip with respect to the lumen, leading to shifts between successive slices. In addition, heartbeat motion produces irregular deformation and pulsation of the vessel wall. Our method aims to suppress motion artifacts associated with the cardiac cycle. However, there remains a need to eliminate catheter-based motion artifacts, a common issue in catheter-based imaging systems, including catheter bending, longitudinal oscillation of the catheter, and nonuniform rotation distortion (NURD). The longitudinal oscillation of the catheter within the vascular lumen results in repeated sampling at a given acquisition position. NURD is inherently present in intravascular imaging systems with rotary-pullback catheters owing to mechanical friction between the catheter torque cable and sheath [35–37]. Recently, distal-scanning endoscopes with miniature micromotors have been designed to deal with artifacts associated with bending and NURD of proximally rotated catheters [38]. However, miniaturizing high-speed motors is challenging and expensive, and the relatively large size of the motor limits the catheter's ability to access vascular stenoses [36,39]. Consequently, catheter-based motion artifacts may still exist in clinical scenarios, degrading image quality and hindering both the identification of various tissue types and the quantification of tissue properties.

4.3 Future direction related to deep learning

In recent years, deep learning (DL) has come to dominate medical imaging by significantly improving the performance of multiple tasks [40]. DL methodologies have gained interest as potential solutions for efficient processing and analysis of large datasets because of their outstanding advantages in image processing, identification, and interpretation over non-learning methods.

Chen et al. [41] reported a method for correcting motion artifacts in optical resolution photoacoustic microscopy (OR-PAM) images using a convolutional neural network (CNN). To the best of our knowledge, it is the first study on motion correction by DL in the field of PAI. They constructed a CNN to post-process the maximum amplitude projection (MAP) images of OR-PAM and alleviate motion artifacts. However, motion suppression for multi-frame PAT by DL had not been explored as of this writing. One of the core bottlenecks is the lack of reliable experimental training data; the size and quality of the datasets determine the efficacy of DL methodologies. Three sources of training data are currently adopted: clinical in vivo data, phantom data, and simulation data. Previous studies on the reconstruction of a single PA image by DL mostly use phantom and simulation data for network training and verification, owing to the lack of ground-truth information on the underlying tissue optical properties or the initial pressure distribution when acquiring experimental measurements [42]. Motion artifacts in IVPA imaging are unobservable in a single cross-sectional image, so multi-frame analysis is essential, which further increases the amount of training data required. In addition, the underlying dynamics of the arterial vessels are generally unknown in in vivo experimental settings. Another challenge is that manufacturing a dynamic phantom mimicking an arterial vessel in a complex clinical scenario is laborious. Computer simulation enables flexible generation of large amounts of data, yet it remains difficult to meet the requirements on the amount and variety of training data for motion artifact correction by DL.
Specifically, algorithms trained on simulation data may fail in a clinical scenario because the simulation data deviate from the real data distribution in several ways, such as a domain gap, sparsity, or selection bias [42]. Given the similarity of IVUS and IVPA in imaging principles and image content, transfer learning may be a possible solution in the future: the motion-correction algorithm is trained on a large dataset of in vivo IVUS pullback studies, and a smaller experimental PA dataset is then used to fine-tune the neural network to the experimental data distribution.

5. Conclusion

In summary, motion artifact removal is essential for stabilizing IVPA image sequences that suffer from cardiac motion. Our method is driven solely by the raw PA signals acquired in successive slices by the ultrasonic transducer during continuous pullback of the catheter. We grouped the PA signals by clustering to select the signal frames collected in the same cardiac phase. The results demonstrate an improvement in the visualization of the L-mode views, where saw-tooth shaped vessel wall boundaries are considerably smoothed. The quantitative evaluation metrics, including AD and AIFD, increased by up to 50% after motion correction, indicating a reduction in the misalignment between vascular cross-sections. In addition, our method outperforms image-based gating in correcting motion artifacts with a low computational burden.

Funding

National Natural Science Foundation of China (62071181).

Acknowledgments

The authors would like to thank the anonymous reviewers for their valuable comments and suggestions to improve the quality of the paper.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. W. Choi, D. Oh, and C. Kim, “Practical photoacoustic tomography: realistic limitations and technical solutions,” J. Appl. Phys. 127(23), 230903 (2020). [CrossRef]  

2. S.S.S. Choi and A. Mandelis, “Review of the state of the art in cardiovascular endoscopy imaging of atherosclerosis using photoacoustic techniques with pulsed and continuous-wave optical excitations,” J. Biomed. Opt. 24(8), 080902 (2019). [CrossRef]  

3. Y. Li, J. Chen, and Z. Chen, “Multimodal intravascular imaging technology for characterization of atherosclerosis,” J. Innov. Opt. Health Sci. 13(01), 2030001 (2020). [CrossRef]  

4. H. Zhao, N. Chen, T. Li, J. Zhang, R. Lin, X. Gong, L. Song, Z. Liu, and C. Liu, “Motion correction in optical resolution photoacoustic microscopy,” IEEE Trans. Med. Imaging 38(9), 2139–2150 (2019). [CrossRef]  

5. H.P. Brecht, R. Su, M. Fronheiser, S.A. Ermilov, A. Conjusteau, and A.A. Oraevsky, “Whole-body three-dimensional optoacoustic tomography system for small animals,” J. Biomed. Opt. 14(6), 064007 (2009). [CrossRef]  

6. R.B. Lam, R.A. Kruger, D.R. Reinecke, S.P. Delrio, M.M. Thornton, P. Picot, and T.G. Morgan, “Dynamic optical angiography of mouse anatomy using radial projections,” Proceedings of SPIE International Conference on Photons Plus Ultrasound: Imaging and Sensing, 23 Feb., vol. 7564, pp. 756405 (2010).

7. R. Lin, J. Chen, H. Wang, M. Yan, W. Zheng, and L. Song, “Longitudinal label-free optical-resolution photoacoustic microscopy of tumor angiogenesis in vivo,” Quant. Imag. Med. Surg. 5(1), 23–29 (2015). [CrossRef]  

8. X. Deán-Ben, S. Gottschalk, B. Mc Larney, S. Shoham, and D. Razansky, “Advanced optoacoustic methods for multiscale imaging of in vivo dynamics,” Chem. Soc. Rev. 46(8), 2158–2198 (2017). [CrossRef]  

9. M.H. Moghari, A. Barthur, M.E. Amaral, T. Geva, and A.J. Powell, “Free-breathing whole-heart 3D cine magnetic resonance imaging with prospective respiratory motion compensation,” Magn. Reson. Med. 80(1), 181–189 (2018). [CrossRef]  

10. Y. Lu, K. Fontaine, T. Mulnix, J.A. Onofrey, S. Ren, V. Panin, J. Jones, M.E. Casey, R. Barnett, P. Kench, and R. Fulton, “Respiratory motion compensation for PET/CT with motion information derived from matched attenuation-corrected gated PET data,” J. Nucl. Med. 59(9), 1480–1486 (2018). [CrossRef]  

11. N. Torbati, A. Ayatollahi, and P. Sadeghipour, “Image-based gating of intravascular ultrasound sequences using the phase information of dual-tree complex wavelet transform coefficients,” IEEE Trans. Med. Imaging 38(12), 2785–2795 (2019). [CrossRef]  

12. R. Bajaj, X. Huang, Y. Kilic, A. Jain, A. Ramasamy, R. Torii, J. Moon, T. Koh, T. Crake, M.K. Parker, V. Tufaro, P.W. Serruys, F. Pugliese, A. Mathur, A. Baumbach, J. Dijkstra, Q. Zhang, and C.V. Bourantas, “A deep learning methodology for the automated detection of end-diastolic frames in intravascular ultrasound images,” Int J Cardiovasc Imaging 37(6), 1825–1837 (2021). [CrossRef]  

13. Z. Sun and Q. Yan, “An off-line gating method for suppressing motion artifacts in ICUS sequence,” Comput. Biol. Med. 40(11-12), 860–868 (2010). [CrossRef]  

14. Z. Sun and M. Li, “Suppression of cardiac motion artifacts in sequential intracoronary optical coherence images,” J. Med. Imag. Health In. 6(7), 1787–1793 (2016). [CrossRef]  

15. A. Hernàndez-Sabaté, D. Gil, J. Garcia-Barnés, and E. Martí, “Image-based cardiac phase retrieval in intravascular ultrasound sequences,” IEEE Trans. Ultrason., Ferroelect., Freq. Contr. 58(1), 60–72 (2011). [CrossRef]  

16. S.K. Nadkarni, D.R. Boughner, and A. Fenster, “Image-based cardiac gating for three-dimensional intravascular ultrasound,” Ultrasound in Medicine & Biology 31(1), 53–63 (2005). [CrossRef]  

17. S.M. O’Malley, J.F. Granada, S. Carlier, M. Naghavi, and I.A. Kakadiaris, “Image-based gating of intravascular ultrasound pullback sequences,” IEEE Trans. Inform. Technol. Biomed. 12(3), 299–306 (2008). [CrossRef]  

18. J. Xia, W.Y. Chen, K. Maslov, M.A. Anastasio, and L.V. Wang, “Retrospective respiration-gated whole-body photoacoustic computed tomography of mice,” J. Biomed. Opt. 19(1), 16003 (2014). [CrossRef]  

19. A. Ron, N. Davoudi, X.L. Deán-Ben, and D. Razansky, “Self-gated respiratory motion rejection for optoacoustic tomography,” Appl. Sci. 9(13), 2737 (2019). [CrossRef]  

20. M. Kim, J. Kang, J.H. Chang, T.K. Song, and Y. Yoo, “Image quality improvement based on inter-frame motion compensation for photoacoustic imaging: a preliminary study,” Proceedings of 2013 IEEE International Ultrasonics Symposium (IUS), Prague, Czech Republic, 21-25 July, pp. 1527-1531 (2013).

21. A. Taruttis, J. Claussen, D. Razansky, and V. Ntziachristos, “Motion clustering for deblurring multispectral optoacoustic tomography images of the mouse heart,” J. Biomed. Opt. 17(1), 016009 (2012). [CrossRef]  

22. M. Xu and L.V. Wang, “Time-domain reconstruction for thermoacoustic tomography in a spherical geometry,” IEEE Trans. Med. Imaging 21(7), 814–822 (2002). [CrossRef]  

23. J. Chung and L. Nguyen, “Motion estimation and correction in photoacoustic tomographic reconstruction,” SIAM J. Imaging Sci. 10(1), 216–242 (2017). [CrossRef]  

24. J. Poudel, L. Yang, and M.A. Anastasio, “A survey of computational frameworks for solving the acoustic inverse problem in three-dimensional photoacoustic computed tomography,” Phys. Med. Biol. 64(14), 14TR01 (2019). [CrossRef]  

25. R. Manwar, M. Zafar, and Q. Xu, “Signal and image processing in biomedical photoacoustic imaging: a review,” Optics 2(1), 1–24 (2021). [CrossRef]  

26. Y. Liu, M. Sun, T. Liu, Y. Ma, D. Hu, C. Li, and N. Feng, “Quantitative reconstruction of absorption coefficients for photoacoustic tomography,” Appl. Sci. 9(6), 1187 (2019). [CrossRef]  

27. T. Chen, T. Lu, S. Song, S. Miao, F. Gao, and J. Li, “A deep learning method based on U-Net for quantitative photoacoustic imaging,” Proceedings of SPIE International Conference on Photons Plus Ultrasound: Imaging and Sensing, vol.11240, pp.112403 V (2020).

28. S. Zhou and Z. Xu, “Automatic grayscale image segmentation based on affinity propagation clustering,” Pattern Anal Applic 23(1), 331–348 (2020). [CrossRef]  

29. C.X. Sun, Y. Yang, H. Wang, and W. Wang, “A clustering approach for motif discovery in ChIP-Seq dataset,” Entropy 21(8), 802 (2019). [CrossRef]  

30. Z. Sun, D. Han, and Y. Yuan, “2-D image reconstruction of photoacoustic endoscopic imaging based on time-reversal,” Comput. Biol. Med. 76, 60–68 (2016). [CrossRef]  

31. Z. Sun, Y. Yuan, and D. Han, “A computer-based simulator for intravascular photoacoustic images,” Comput. Biol. Med. 81, 176–187 (2017). [CrossRef]  

32. S.L. Jacques, “Optical properties of biological tissues: a review,” Phys. Med. Biol. 58(11), R37–R61 (2013). [CrossRef]  

33. Y. Xu, H. Xue, and G. Hu, “Diameter measurements of coronary artery segments based on image analysis of X-ray angiograms,” Chinese J. Biomed. Eng. 26(6), 874–878 (2007).

34. D. Vanderlaan, A.B. Karpiouk, D. Yeager, and S. Emelianov, “Real-time intravascular ultrasound and photoacoustic imaging,” IEEE Trans. Ultrason., Ferroelect., Freq. Contr 64(1), 141–149 (2017). [CrossRef]  

35. J. Mavadia-Shukla, J. Zhang, K. Li, and X. Li, “Stick-slip non-uniform rotation distortion correction in distal scanning optical coherence tomography catheters,” J. Innov. Opt. Heal. Sci. 13(06), 2050030 (2020). [CrossRef]  

36. E. Abouei, A.M.D. Lee, H. Pahlevaninezhad, G. Hohert, M. Cua, P. Lane, S. Lam, and C. MacAulay, “Correction of motion artifacts in endoscopic optical coherence tomography and autofluorescence images based on azimuthal en face image registration,” J. Biomed. Opt. 23(01), 1 (2018). [CrossRef]  

37. O.O. Ahsen, H.C. Lee, M.G. Giacomelli, Z. Wang, K. Liang, T.H. Tsai, B. Potsaid, H. Mashimo, and J.G. Fujimoto, “Correction of rotational distortion for catheter-based en face OCT and OCT angiography,” Opt. Lett. 39(20), 5973–5976 (2014). [CrossRef]  

38. J. Peng, L. Ma, X. Li, H. Tang, Y. Li, and S. Chen, “A novel synchronous micro motor for intravascular ultrasound imaging,” IEEE Trans. Biomed. Eng. 66(3), 802–809 (2019). [CrossRef]  

39. F. Griese, S. Latus, M. Schluter, M. Graeser, M. Lutz, A. Schlaefer, and T. Knopp, “In-vitro MPI-guided IVOCT catheter tracking in real time for motion artifact compensation,” PLoS One 15(3), e0230821 (2020). [CrossRef]  

40. H. Deng, H. Qiao, Q. Dai, and C. Ma, “Deep learning in photoacoustic imaging: a review,” J. Biomed. Opt. 26(04), 040901 (2021). [CrossRef]  

41. X. Chen, W. Qi, and L. Xi, “Deep-learning-based motion-correction algorithm in optical resolution photoacoustic microscopy,” Vis. Comput. Ind. Biomed. Art 2(1), 12 (2019). [CrossRef]  

42. J. Grohl, M. Schellenberg, K. Dreher, and L. Maier-Hein, “Deep learning for biomedical photoacoustic imaging: A review,” Photoacoustics 22, 100241 (2021). [CrossRef]  

References

  • View by:

  1. W. Choi, D. Oh, and C. Kim, “Practical photoacoustic tomography: realistic limitations and technical solutions,” J. Appl. Phys. 127(23), 230903 (2020).
    [Crossref]
  2. S.S.S. Choi and A. Mandelis, “Review of the state of the art in cardiovascular endoscopy imaging of atherosclerosis using photoacoustic techniques with pulsed and continuous-wave optical excitations,” J. Biomed. Opt. 24(8), 080902 (2019).
    [Crossref]
  3. Y. Li, J. Chen, and Z. Chen, “Multimodal intravascular imaging technology for characterization of atherosclerosis,” J. Innov. Opt. Health Sci. 13(01), 2030001 (2020).
    [Crossref]
  4. H. Zhao, N. Chen, T. Li, J. Zhang, R. Lin, X. Gong, L. Song, Z. Liu, and C. Liu, “Motion correction in optical resolution photoacoustic microscopy,” IEEE Trans. Med. Imaging 38(9), 2139–2150 (2019).
    [Crossref]
  5. H.P. Brecht, R. Su, M. Fronheiser, S.A. Ermilov, A. Conjusteau, and A.A. Oraevsky, “Whole-body three-dimensional optoacoustic tomography system for small animals,” J. Biomed. Opt. 14(6), 064007 (2009).
    [Crossref]
  6. R.B. Lam, R.A. Kruger, D.R. Reinecke, S.P. Delrio, M.M. Thornton, P. Picot, and T.G. Morgan, “Dynamic optical angiography of mouse anatomy using radial projections,” Proceedings of SPIE International Conference on Photons Plus Ultrasound: Imaging and Sensing, 23 Feb., vol. 7564, pp. 756405 (2010).
  7. R. Lin, J. Chen, H. Wang, M. Yan, W. Zheng, and L. Song, “Longitudinal label-free optical-resolution photoacoustic microscopy of tumor angiogenesis in vivo,” Quant. Imag. Med. Surg. 5(1), 23–29 (2015).
    [Crossref]
  8. X. Deán-Ben, S. Gottschalk, B. Mc Larney, S. Shoham, and D. Razansky, “Advanced optoacoustic methods for multiscale imaging of in vivo dynamics,” Chem. Soc. Rev. 46(8), 2158–2198 (2017).
    [Crossref]
  9. M.H. Moghari, A. Barthur, M.E. Amaral, T. Geva, and A.J. Powell, “Free-breathing whole-heart 3D cine magnetic resonance imaging with prospective respiratory motion compensation,” Magn. Reson. Med. 80(1), 181–189 (2018).
    [Crossref]
  10. Y. Lu, K. Fontaine, T. Mulnix, J.A. Onofrey, S. Ren, V. Panin, J. Jones, M.E. Casey, R. Barnett, P. Kench, and R. Fulton, “Respiratory motion compensation for PET/CT with motion information derived from matched attenuation-corrected gated PET data,” J. Nucl. Med. 59(9), 1480–1486 (2018).
    [Crossref]
  11. N. Torbati, A. Ayatollahi, and P. Sadeghipour, “Image-based gating of intravascular ultrasound sequences using the phase information of dual-tree complex wavelet transform coefficients,” IEEE Trans. Med. Imaging 38(12), 2785–2795 (2019).
    [Crossref]
  12. R. Bajaj, X. Huang, Y. Kilic, A. Jain, A. Ramasamy, R. Torii, J. Moon, T. Koh, T. Crake, M.K. Parker, V. Tufaro, P.W. Serruys, F. Pugliese, A. Mathur, A. Baumbach, J. Dijkstra, Q. Zhang, and C.V. Bourantas, “A deep learning methodology for the automated detection of end-diastolic frames in intravascular ultrasound images,” Int J Cardiovasc Imaging 37(6), 1825–1837 (2021).
    [Crossref]
  13. Z. Sun and Q. Yan, “An off-line gating method for suppressing motion artifacts in ICUS sequence,” Comput. Biol. Med. 40(11-12), 860–868 (2010).
    [Crossref]
  14. Z. Sun and M. Li, “Suppression of cardiac motion artifacts in sequential intracoronary optical coherence images,” J. Med. Imag. Health In. 6(7), 1787–1793 (2016).
    [Crossref]
  15. A. Hernàndez-Sabaté, D. Gil, J. Garcia-Barnés, and E. Martí, “Image-based cardiac phase retrieval in intravascular ultrasound sequences,” IEEE Trans. Ultrason., Ferroelect., Freq. Contr. 58(1), 60–72 (2011).
    [Crossref]
  16. S.K. Nadkarni, D.R. Boughner, and A. Fenster, “Image-based cardiac gating for three-dimensional intravascular ultrasound,” Ultrasound in Medicine & Biology 31(1), 53–63 (2005).
    [Crossref]
  17. S.M. O’Malley, J.F. Granada, S. Carlier, M. Naghavi, and I.A. Kakadiaris, “Image-based gating of intravascular ultrasound pullback sequences,” IEEE Trans. Inform. Technol. Biomed. 12(3), 299–306 (2008).
    [Crossref]
  18. J. Xia, W.Y. Chen, K. Maslov, M.A. Anastasio, and L.V. Wang, “Retrospective respiration-gated whole-body photoacoustic computed tomography of mice,” J. Biomed. Opt. 19(1), 16003 (2014).
    [Crossref]
  19. A. Ron, N. Davoudi, X.L. Deán-Ben, and D. Razansky, “Self-gated respiratory motion rejection for optoacoustic tomography,” Appl. Sci. 9(13), 2737 (2019).
    [Crossref]
  20. M. Kim, J. Kang, J.H. Chang, T.K. Song, and Y. Yoo, “Image quality improvement based on inter-frame motion compensation for photoacoustic imaging: a preliminary study,” Proceedings of 2013 IEEE International Ultrasonics Symposium (IUS), Prague, Czech Republic, 21-25 July, pp. 1527-1531 (2013).
  21. A. Taruttis, J. Claussen, D. Razansky, and V. Ntziachristos, “Motion clustering for deblurring multispectral optoacoustic tomography images of the mouse heart,” J. Biomed. Opt. 17(1), 016009 (2012).
    [Crossref]
  22. M. Xu and L.V. Wang, “Time-domain reconstruction for thermoacoustic tomography in a spherical geometry,” IEEE Trans. Med. Imaging 21(7), 814–822 (2002).
    [Crossref]
  23. J. Chung and L. Nguyen, “Motion estimation and correction in photoacoustic tomographic reconstruction,” SIAM J. Imaging Sci. 10(1), 216–242 (2017).
    [Crossref]
  24. J. Poudel, L. Yang, and M.A. Anastasio, “A survey of computational frameworks for solving the acoustic inverse problem in three-dimensional photoacoustic computed tomography,” Phys. Med. Biol. 64(14), 14TR01 (2019).
    [Crossref]
  25. R. Manwar, M. Zafar, and Q. Xu, “Signal and image processing in biomedical photoacoustic imaging: a review,” Optics 2(1), 1–24 (2021).
    [Crossref]
  26. Y. Liu, M. Sun, T. Liu, Y. Ma, D. Hu, C. Li, and N. Feng, “Quantitative reconstruction of absorption coefficients for photoacoustic tomography,” Appl. Sci. 9(6), 1187 (2019).
    [Crossref]
  27. T. Chen, T. Lu, S. Song, S. Miao, F. Gao, and J. Li, “A deep learning method based on U-Net for quantitative photoacoustic imaging,” Proceedings of SPIE International Conference on Photons Plus Ultrasound: Imaging and Sensing, vol.11240, pp.112403 V (2020).
  28. S. Zhou and Z. Xu, “Automatic grayscale image segmentation based on affinity propagation clustering,” Pattern Anal Applic 23(1), 331–348 (2020).
    [Crossref]
  29. C.X. Sun, Y. Yang, H. Wang, and W. Wang, “A clustering approach for motif discovery in ChIP-Seq dataset,” Entropy 21(8), 802 (2019).
    [Crossref]
  30. Z. Sun, D. Han, and Y. Yuan, “2-D image reconstruction of photoacoustic endoscopic imaging based on time-reversal,” Comput. Biol. Med. 76, 60–68 (2016).
    [Crossref]
  31. Z. Sun, Y. Yuan, and D. Han, “A computer-based simulator for intravascular photoacoustic images,” Comput. Biol. Med. 81, 176–187 (2017).
    [Crossref]
  32. S.L. Jacques, “Optical properties of biological tissues: a review,” Phys. Med. Biol. 58(11), R37–R61 (2013).
    [Crossref]
  33. Y. Xu, H. Xue, and G. Hu, “Diameter measurements of coronary artery segments based on image analysis of X-ray angiograms,” Chinese J. Biomed. Eng. 26(6), 874–878 (2007).
  34. D. Vanderlaan, A.B. Karpiouk, D. Yeager, and S. Emelianov, “Real-time intravascular ultrasound and photoacoustic imaging,” IEEE Trans. Ultrason., Ferroelect., Freq. Contr 64(1), 141–149 (2017).
    [Crossref]
  35. J. Mavadia-Shukla, J. Zhang, K. Li, and X. Li, “Stick-slip non-uniform rotation distortion correction in distal scanning optical coherence tomography catheters,” J. Innov. Opt. Heal. Sci. 13(06), 2050030 (2020).
    [Crossref]
  36. E. Abouei, A.M.D. Lee, H. Pahlevaninezhad, G. Hohert, M. Cua, P. Lane, S. Lam, and C. MacAulay, “Correction of motion artifacts in endoscopic optical coherence tomography and autofluorescence images based on azimuthal en face image registration,” J. Biomed. Opt. 23(01), 1 (2018).
    [Crossref]
  37. O.O. Ahsen, H.C. Lee, M.G. Giacomelli, Z. Wang, K. Liang, T.H. Tsai, B. Potsaid, H. Mashimo, and J.G. Fujimoto, “Correction of rotational distortion for catheter-based en face OCT and OCT angiography,” Opt. Lett. 39(20), 5973–5976 (2014).
    [Crossref]
  38. J. Peng, L. Ma, X. Li, H. Tang, Y. Li, and S. Chen, “A novel synchronous micro motor for intravascular ultrasound imaging,” IEEE Trans. Biomed. Eng. 66(3), 802–809 (2019).
    [Crossref]
  39. F. Griese, S. Latus, M. Schluter, M. Graeser, M. Lutz, A. Schlaefer, and T. Knopp, “In-vitro MPI-guided IVOCT catheter tracking in real time for motion artifact compensation,” PLoS One 15(3), e0230821 (2020).
    [Crossref]
  40. H. Deng, H. Qiao, Q. Dai, and C. Ma, “Deep learning in photoacoustic imaging: a review,” J. Biomed. Opt. 26(04), 040901 (2021).
    [Crossref]
  41. X. Chen, W. Qi, and L. Xi, “Deep-learning-based motion-correction algorithm in optical resolution photoacoustic microscopy,” Vis. Comput. Ind. Biomed. Art 2(1), 12 (2019).
    [Crossref]
  42. J. Grohl, M. Schellenberg, K. Dreher, and L. Maier-Hein, “Deep learning for biomedical photoacoustic imaging: A review,” Photoacoustics 22, 100241 (2021).
    [Crossref]

2021 (4)

R. Bajaj, X. Huang, Y. Kilic, A. Jain, A. Ramasamy, R. Torii, J. Moon, T. Koh, T. Crake, M.K. Parker, V. Tufaro, P.W. Serruys, F. Pugliese, A. Mathur, A. Baumbach, J. Dijkstra, Q. Zhang, and C.V. Bourantas, “A deep learning methodology for the automated detection of end-diastolic frames in intravascular ultrasound images,” Int J Cardiovasc Imaging 37(6), 1825–1837 (2021).
[Crossref]

R. Manwar, M. Zafar, and Q. Xu, “Signal and image processing in biomedical photoacoustic imaging: a review,” Optics 2(1), 1–24 (2021).
[Crossref]

H. Deng, H. Qiao, Q. Dai, and C. Ma, “Deep learning in photoacoustic imaging: a review,” J. Biomed. Opt. 26(04), 040901 (2021).
[Crossref]

J. Grohl, M. Schellenberg, K. Dreher, and L. Maier-Hein, “Deep learning for biomedical photoacoustic imaging: A review,” Photoacoustics 22, 100241 (2021).
[Crossref]

2020 (5)

F. Griese, S. Latus, M. Schluter, M. Graeser, M. Lutz, A. Schlaefer, and T. Knopp, “In-vitro MPI-guided IVOCT catheter tracking in real time for motion artifact compensation,” PLoS One 15(3), e0230821 (2020).
[Crossref]

J. Mavadia-Shukla, J. Zhang, K. Li, and X. Li, “Stick-slip non-uniform rotation distortion correction in distal scanning optical coherence tomography catheters,” J. Innov. Opt. Heal. Sci. 13(06), 2050030 (2020).
[Crossref]

S. Zhou and Z. Xu, “Automatic grayscale image segmentation based on affinity propagation clustering,” Pattern Anal Applic 23(1), 331–348 (2020).
[Crossref]

W. Choi, D. Oh, and C. Kim, “Practical photoacoustic tomography: realistic limitations and technical solutions,” J. Appl. Phys. 127(23), 230903 (2020).
[Crossref]

Y. Li, J. Chen, and Z. Chen, “Multimodal intravascular imaging technology for characterization of atherosclerosis,” J. Innov. Opt. Health Sci. 13(01), 2030001 (2020).
[Crossref]

2019 (9)

H. Zhao, N. Chen, T. Li, J. Zhang, R. Lin, X. Gong, L. Song, Z. Liu, and C. Liu, “Motion correction in optical resolution photoacoustic microscopy,” IEEE Trans. Med. Imaging 38(9), 2139–2150 (2019).
[Crossref]

S.S.S. Choi and A. Mandelis, “Review of the state of the art in cardiovascular endoscopy imaging of atherosclerosis using photoacoustic techniques with pulsed and continuous-wave optical excitations,” J. Biomed. Opt. 24(8), 080902 (2019).
[Crossref]

N. Torbati, A. Ayatollahi, and P. Sadeghipour, “Image-based gating of intravascular ultrasound sequences using the phase information of dual-tree complex wavelet transform coefficients,” IEEE Trans. Med. Imaging 38(12), 2785–2795 (2019).
[Crossref]

A. Ron, N. Davoudi, X.L. Deán-Ben, and D. Razansky, “Self-gated respiratory motion rejection for optoacoustic tomography,” Appl. Sci. 9(13), 2737 (2019).
[Crossref]

C.X. Sun, Y. Yang, H. Wang, and W. Wang, “A clustering approach for motif discovery in ChIP-Seq dataset,” Entropy 21(8), 802 (2019).
[Crossref]

Y. Liu, M. Sun, T. Liu, Y. Ma, D. Hu, C. Li, and N. Feng, “Quantitative reconstruction of absorption coefficients for photoacoustic tomography,” Appl. Sci. 9(6), 1187 (2019).
[Crossref]

J. Poudel, L. Yang, and M.A. Anastasio, “A survey of computational frameworks for solving the acoustic inverse problem in three-dimensional photoacoustic computed tomography,” Phys. Med. Biol. 64(14), 14TR01 (2019).
[Crossref]

J. Peng, L. Ma, X. Li, H. Tang, Y. Li, and S. Chen, “A novel synchronous micro motor for intravascular ultrasound imaging,” IEEE Trans. Biomed. Eng. 66(3), 802–809 (2019).
[Crossref]

X. Chen, W. Qi, and L. Xi, “Deep-learning-based motion-correction algorithm in optical resolution photoacoustic microscopy,” Vis. Comput. Ind. Biomed. Art 2(1), 12 (2019).
[Crossref]

2018 (3)

E. Abouei, A.M.D. Lee, H. Pahlevaninezhad, G. Hohert, M. Cua, P. Lane, S. Lam, and C. MacAulay, “Correction of motion artifacts in endoscopic optical coherence tomography and autofluorescence images based on azimuthal en face image registration,” J. Biomed. Opt. 23(01), 1 (2018).
[Crossref]

M.H. Moghari, A. Barthur, M.E. Amaral, T. Geva, and A.J. Powell, “Free-breathing whole-heart 3D cine magnetic resonance imaging with prospective respiratory motion compensation,” Magn. Reson. Med. 80(1), 181–189 (2018).
[Crossref]

Y. Lu, K. Fontaine, T. Mulnix, J.A. Onofrey, S. Ren, V. Panin, J. Jones, M.E. Casey, R. Barnett, P. Kench, and R. Fulton, “Respiratory motion compensation for PET/CT with motion information derived from matched attenuation-corrected gated PET data,” J. Nucl. Med. 59(9), 1480–1486 (2018).
[Crossref]

2017 (4)

X. Deán-Ben, S. Gottschalk, B. Mc Larney, S. Shoham, and D. Razansky, “Advanced optoacoustic methods for multiscale imaging of in vivo dynamics,” Chem. Soc. Rev. 46(8), 2158–2198 (2017).
[Crossref]

D. Vanderlaan, A.B. Karpiouk, D. Yeager, and S. Emelianov, “Real-time intravascular ultrasound and photoacoustic imaging,” IEEE Trans. Ultrason., Ferroelect., Freq. Contr 64(1), 141–149 (2017).
[Crossref]

J. Chung and L. Nguyen, “Motion estimation and correction in photoacoustic tomographic reconstruction,” SIAM J. Imaging Sci. 10(1), 216–242 (2017).
[Crossref]

Z. Sun, Y. Yuan, and D. Han, “A computer-based simulator for intravascular photoacoustic images,” Comput. Biol. Med. 81, 176–187 (2017).
[Crossref]

2016 (2)

Z. Sun, D. Han, and Y. Yuan, “2-D image reconstruction of photoacoustic endoscopic imaging based on time-reversal,” Comput. Biol. Med. 76, 60–68 (2016).

Z. Sun and M. Li, “Suppression of cardiac motion artifacts in sequential intracoronary optical coherence images,” J. Med. Imaging Health Inf. 6(7), 1787–1793 (2016).

2015 (1)

R. Lin, J. Chen, H. Wang, M. Yan, W. Zheng, and L. Song, “Longitudinal label-free optical-resolution photoacoustic microscopy of tumor angiogenesis in vivo,” Quant. Imaging Med. Surg. 5(1), 23–29 (2015).

2014 (2)

J. Xia, W.Y. Chen, K. Maslov, M.A. Anastasio, and L.V. Wang, “Retrospective respiration-gated whole-body photoacoustic computed tomography of mice,” J. Biomed. Opt. 19(1), 16003 (2014).

O.O. Ahsen, H.C. Lee, M.G. Giacomelli, Z. Wang, K. Liang, T.H. Tsai, B. Potsaid, H. Mashimo, and J.G. Fujimoto, “Correction of rotational distortion for catheter-based en face OCT and OCT angiography,” Opt. Lett. 39(20), 5973–5976 (2014).

2013 (2)

S.L. Jacques, “Optical properties of biological tissues: a review,” Phys. Med. Biol. 58(11), R37–R61 (2013).

M. Kim, J. Kang, J.H. Chang, T.K. Song, and Y. Yoo, “Image quality improvement based on inter-frame motion compensation for photoacoustic imaging: a preliminary study,” in Proceedings of the 2013 IEEE International Ultrasonics Symposium (IUS), Prague, Czech Republic, 21–25 July 2013, pp. 1527–1531.

2012 (1)

A. Taruttis, J. Claussen, D. Razansky, and V. Ntziachristos, “Motion clustering for deblurring multispectral optoacoustic tomography images of the mouse heart,” J. Biomed. Opt. 17(1), 016009 (2012).

2011 (1)

A. Hernàndez-Sabaté, D. Gil, J. Garcia-Barnés, and E. Martí, “Image-based cardiac phase retrieval in intravascular ultrasound sequences,” IEEE Trans. Ultrason., Ferroelect., Freq. Contr. 58(1), 60–72 (2011).

2010 (2)

Z. Sun and Q. Yan, “An off-line gating method for suppressing motion artifacts in ICUS sequence,” Comput. Biol. Med. 40(11-12), 860–868 (2010).

R.B. Lam, R.A. Kruger, D.R. Reinecke, S.P. Delrio, M.M. Thornton, P. Picot, and T.G. Morgan, “Dynamic optical angiography of mouse anatomy using radial projections,” in Proceedings of SPIE Photons Plus Ultrasound: Imaging and Sensing, vol. 7564, p. 756405 (2010).

2009 (1)

H.P. Brecht, R. Su, M. Fronheiser, S.A. Ermilov, A. Conjusteau, and A.A. Oraevsky, “Whole-body three-dimensional optoacoustic tomography system for small animals,” J. Biomed. Opt. 14(6), 064007 (2009).

2008 (1)

S.M. O’Malley, J.F. Granada, S. Carlier, M. Naghavi, and I.A. Kakadiaris, “Image-based gating of intravascular ultrasound pullback sequences,” IEEE Trans. Inform. Technol. Biomed. 12(3), 299–306 (2008).

2007 (1)

Y. Xu, H. Xue, and G. Hu, “Diameter measurements of coronary artery segments based on image analysis of X-ray angiograms,” Chinese J. Biomed. Eng. 26(6), 874–878 (2007).

2005 (1)

S.K. Nadkarni, D.R. Boughner, and A. Fenster, “Image-based cardiac gating for three-dimensional intravascular ultrasound,” Ultrasound Med. Biol. 31(1), 53–63 (2005).

2002 (1)

M. Xu and L.V. Wang, “Time-domain reconstruction for thermoacoustic tomography in a spherical geometry,” IEEE Trans. Med. Imaging 21(7), 814–822 (2002).

[Crossref]

Razansky, D.

A. Ron, N. Davoudi, X.L. Deán-Ben, and D. Razansky, “Self-gated respiratory motion rejection for optoacoustic tomography,” Appl. Sci. 9(13), 2737 (2019).
[Crossref]

X. Deán-Ben, S. Gottschalk, B. Mc Larney, S. Shoham, and D. Razansky, “Advanced optoacoustic methods for multiscale imaging of in vivo dynamics,” Chem. Soc. Rev. 46(8), 2158–2198 (2017).
[Crossref]

A. Taruttis, J. Claussen, D. Razansky, and V. Ntziachristos, “Motion clustering for deblurring multispectral optoacoustic tomography images of the mouse heart,” J. Biomed. Opt. 17(1), 016009 (2012).
[Crossref]

Reinecke, D.R.

R.B. Lam, R.A. Kruger, D.R. Reinecke, S.P. Delrio, M.M. Thornton, P. Picot, and T.G. Morgan, “Dynamic optical angiography of mouse anatomy using radial projections,” Proceedings of SPIE International Conference on Photons Plus Ultrasound: Imaging and Sensing, 23 Feb., vol. 7564, pp. 756405 (2010).

Ren, S.

Y. Lu, K. Fontaine, T. Mulnix, J.A. Onofrey, S. Ren, V. Panin, J. Jones, M.E. Casey, R. Barnett, P. Kench, and R. Fulton, “Respiratory motion compensation for PET/CT with motion information derived from matched attenuation-corrected gated PET data,” J. Nucl. Med. 59(9), 1480–1486 (2018).
[Crossref]

Ron, A.

A. Ron, N. Davoudi, X.L. Deán-Ben, and D. Razansky, “Self-gated respiratory motion rejection for optoacoustic tomography,” Appl. Sci. 9(13), 2737 (2019).
[Crossref]

Sadeghipour, P.

N. Torbati, A. Ayatollahi, and P. Sadeghipour, “Image-based gating of intravascular ultrasound sequences using the phase information of dual-tree complex wavelet transform coefficients,” IEEE Trans. Med. Imaging 38(12), 2785–2795 (2019).
[Crossref]

Schellenberg, M.

J. Grohl, M. Schellenberg, K. Dreher, and L. Maier-Hein, “Deep learning for biomedical photoacoustic imaging: A review,” Photoacoustics 22, 100241 (2021).
[Crossref]

Schlaefer, A.

F. Griese, S. Latus, M. Schluter, M. Graeser, M. Lutz, A. Schlaefer, and T. Knopp, “In-vitro MPI-guided IVOCT catheter tracking in real time for motion artifact compensation,” PLoS One 15(3), e0230821 (2020).
[Crossref]

Schluter, M.

F. Griese, S. Latus, M. Schluter, M. Graeser, M. Lutz, A. Schlaefer, and T. Knopp, “In-vitro MPI-guided IVOCT catheter tracking in real time for motion artifact compensation,” PLoS One 15(3), e0230821 (2020).
[Crossref]

Serruys, P.W.

R. Bajaj, X. Huang, Y. Kilic, A. Jain, A. Ramasamy, R. Torii, J. Moon, T. Koh, T. Crake, M.K. Parker, V. Tufaro, P.W. Serruys, F. Pugliese, A. Mathur, A. Baumbach, J. Dijkstra, Q. Zhang, and C.V. Bourantas, “A deep learning methodology for the automated detection of end-diastolic frames in intravascular ultrasound images,” Int J Cardiovasc Imaging 37(6), 1825–1837 (2021).
[Crossref]

Shoham, S.

X. Deán-Ben, S. Gottschalk, B. Mc Larney, S. Shoham, and D. Razansky, “Advanced optoacoustic methods for multiscale imaging of in vivo dynamics,” Chem. Soc. Rev. 46(8), 2158–2198 (2017).
[Crossref]

Song, L.

H. Zhao, N. Chen, T. Li, J. Zhang, R. Lin, X. Gong, L. Song, Z. Liu, and C. Liu, “Motion correction in optical resolution photoacoustic microscopy,” IEEE Trans. Med. Imaging 38(9), 2139–2150 (2019).
[Crossref]

R. Lin, J. Chen, H. Wang, M. Yan, W. Zheng, and L. Song, “Longitudinal label-free optical-resolution photoacoustic microscopy of tumor angiogenesis in vivo,” Quant. Imag. Med. Surg. 5(1), 23–29 (2015).
[Crossref]

Song, S.

T. Chen, T. Lu, S. Song, S. Miao, F. Gao, and J. Li, “A deep learning method based on U-Net for quantitative photoacoustic imaging,” Proceedings of SPIE International Conference on Photons Plus Ultrasound: Imaging and Sensing, vol.11240, pp.112403 V (2020).

Song, T.K.

M. Kim, J. Kang, J.H. Chang, T.K. Song, and Y. Yoo, “Image quality improvement based on inter-frame motion compensation for photoacoustic imaging: a preliminary study,” Proceedings of 2013 IEEE International Ultrasonics Symposium (IUS), Prague, Czech Republic, 21-25 July, pp. 1527-1531 (2013).

Su, R.

H.P. Brecht, R. Su, M. Fronheiser, S.A. Ermilov, A. Conjusteau, and A.A. Oraevsky, “Whole-body three-dimensional optoacoustic tomography system for small animals,” J. Biomed. Opt. 14(6), 064007 (2009).
[Crossref]

Sun, C.X.

C.X. Sun, Y. Yang, H. Wang, and W. Wang, “A clustering approach for motif discovery in ChIP-Seq dataset,” Entropy 21(8), 802 (2019).
[Crossref]

Sun, M.

Y. Liu, M. Sun, T. Liu, Y. Ma, D. Hu, C. Li, and N. Feng, “Quantitative reconstruction of absorption coefficients for photoacoustic tomography,” Appl. Sci. 9(6), 1187 (2019).
[Crossref]

Sun, Z.

Z. Sun, Y. Yuan, and D. Han, “A computer-based simulator for intravascular photoacoustic images,” Comput. Biol. Med. 81, 176–187 (2017).
[Crossref]

Z. Sun, D. Han, and Y. Yuan, “2-D image reconstruction of photoacoustic endoscopic imaging based on time-reversal,” Comput. Biol. Med. 76, 60–68 (2016).
[Crossref]

Z. Sun and M. Li, “Suppression of cardiac motion artifacts in sequential intracoronary optical coherence images,” J. Med. Imag. Health In. 6(7), 1787–1793 (2016).
[Crossref]

Z. Sun and Q. Yan, “An off-line gating method for suppressing motion artifacts in ICUS sequence,” Comput. Biol. Med. 40(11-12), 860–868 (2010).
[Crossref]

Tang, H.

J. Peng, L. Ma, X. Li, H. Tang, Y. Li, and S. Chen, “A novel synchronous micro motor for intravascular ultrasound imaging,” IEEE Trans. Biomed. Eng. 66(3), 802–809 (2019).
[Crossref]

Taruttis, A.

A. Taruttis, J. Claussen, D. Razansky, and V. Ntziachristos, “Motion clustering for deblurring multispectral optoacoustic tomography images of the mouse heart,” J. Biomed. Opt. 17(1), 016009 (2012).
[Crossref]

Thornton, M.M.

R.B. Lam, R.A. Kruger, D.R. Reinecke, S.P. Delrio, M.M. Thornton, P. Picot, and T.G. Morgan, “Dynamic optical angiography of mouse anatomy using radial projections,” Proceedings of SPIE International Conference on Photons Plus Ultrasound: Imaging and Sensing, 23 Feb., vol. 7564, pp. 756405 (2010).

Torbati, N.

N. Torbati, A. Ayatollahi, and P. Sadeghipour, “Image-based gating of intravascular ultrasound sequences using the phase information of dual-tree complex wavelet transform coefficients,” IEEE Trans. Med. Imaging 38(12), 2785–2795 (2019).
[Crossref]

Torii, R.

R. Bajaj, X. Huang, Y. Kilic, A. Jain, A. Ramasamy, R. Torii, J. Moon, T. Koh, T. Crake, M.K. Parker, V. Tufaro, P.W. Serruys, F. Pugliese, A. Mathur, A. Baumbach, J. Dijkstra, Q. Zhang, and C.V. Bourantas, “A deep learning methodology for the automated detection of end-diastolic frames in intravascular ultrasound images,” Int J Cardiovasc Imaging 37(6), 1825–1837 (2021).
[Crossref]

Tsai, T.H.

Tufaro, V.

R. Bajaj, X. Huang, Y. Kilic, A. Jain, A. Ramasamy, R. Torii, J. Moon, T. Koh, T. Crake, M.K. Parker, V. Tufaro, P.W. Serruys, F. Pugliese, A. Mathur, A. Baumbach, J. Dijkstra, Q. Zhang, and C.V. Bourantas, “A deep learning methodology for the automated detection of end-diastolic frames in intravascular ultrasound images,” Int J Cardiovasc Imaging 37(6), 1825–1837 (2021).
[Crossref]

Vanderlaan, D.

D. Vanderlaan, A.B. Karpiouk, D. Yeager, and S. Emelianov, “Real-time intravascular ultrasound and photoacoustic imaging,” IEEE Trans. Ultrason., Ferroelect., Freq. Contr 64(1), 141–149 (2017).
[Crossref]

Wang, H.

C.X. Sun, Y. Yang, H. Wang, and W. Wang, “A clustering approach for motif discovery in ChIP-Seq dataset,” Entropy 21(8), 802 (2019).
[Crossref]

R. Lin, J. Chen, H. Wang, M. Yan, W. Zheng, and L. Song, “Longitudinal label-free optical-resolution photoacoustic microscopy of tumor angiogenesis in vivo,” Quant. Imag. Med. Surg. 5(1), 23–29 (2015).
[Crossref]

Wang, L.V.

J. Xia, W.Y. Chen, K. Maslov, M.A. Anastasio, and L.V. Wang, “Retrospective respiration-gated whole-body photoacoustic computed tomography of mice,” J. Biomed. Opt. 19(1), 16003 (2014).
[Crossref]

M. Xu and L.V. Wang, “Time-domain reconstruction for thermoacoustic tomography in a spherical geometry,” IEEE Trans. Med. Imaging 21(7), 814–822 (2002).
[Crossref]

Wang, W.

C.X. Sun, Y. Yang, H. Wang, and W. Wang, “A clustering approach for motif discovery in ChIP-Seq dataset,” Entropy 21(8), 802 (2019).
[Crossref]

Wang, Z.

Xi, L.

X. Chen, W. Qi, and L. Xi, “Deep-learning-based motion-correction algorithm in optical resolution photoacoustic microscopy,” Vis. Comput. Ind. Biomed. Art 2(1), 12 (2019).
[Crossref]

Xia, J.

J. Xia, W.Y. Chen, K. Maslov, M.A. Anastasio, and L.V. Wang, “Retrospective respiration-gated whole-body photoacoustic computed tomography of mice,” J. Biomed. Opt. 19(1), 16003 (2014).
[Crossref]

Xu, M.

M. Xu and L.V. Wang, “Time-domain reconstruction for thermoacoustic tomography in a spherical geometry,” IEEE Trans. Med. Imaging 21(7), 814–822 (2002).
[Crossref]

Xu, Q.

R. Manwar, M. Zafar, and Q. Xu, “Signal and image processing in biomedical photoacoustic imaging: a review,” Optics 2(1), 1–24 (2021).
[Crossref]

Xu, Y.

Y. Xu, H. Xue, and G. Hu, “Diameter measurements of coronary artery segments based on image analysis of X-ray angiograms,” Chinese J. Biomed. Eng. 26(6), 874–878 (2007).

Xu, Z.

S. Zhou and Z. Xu, “Automatic grayscale image segmentation based on affinity propagation clustering,” Pattern Anal Applic 23(1), 331–348 (2020).
[Crossref]

Xue, H.

Y. Xu, H. Xue, and G. Hu, “Diameter measurements of coronary artery segments based on image analysis of X-ray angiograms,” Chinese J. Biomed. Eng. 26(6), 874–878 (2007).

Yan, M.

R. Lin, J. Chen, H. Wang, M. Yan, W. Zheng, and L. Song, “Longitudinal label-free optical-resolution photoacoustic microscopy of tumor angiogenesis in vivo,” Quant. Imag. Med. Surg. 5(1), 23–29 (2015).
[Crossref]

Yan, Q.

Z. Sun and Q. Yan, “An off-line gating method for suppressing motion artifacts in ICUS sequence,” Comput. Biol. Med. 40(11-12), 860–868 (2010).
[Crossref]

Yang, L.

J. Poudel, L. Yang, and M.A. Anastasio, “A survey of computational frameworks for solving the acoustic inverse problem in three-dimensional photoacoustic computed tomography,” Phys. Med. Biol. 64(14), 14TR01 (2019).
[Crossref]

Yang, Y.

C.X. Sun, Y. Yang, H. Wang, and W. Wang, “A clustering approach for motif discovery in ChIP-Seq dataset,” Entropy 21(8), 802 (2019).
[Crossref]

Yeager, D.

D. Vanderlaan, A.B. Karpiouk, D. Yeager, and S. Emelianov, “Real-time intravascular ultrasound and photoacoustic imaging,” IEEE Trans. Ultrason., Ferroelect., Freq. Contr 64(1), 141–149 (2017).
[Crossref]

Yoo, Y.

M. Kim, J. Kang, J.H. Chang, T.K. Song, and Y. Yoo, “Image quality improvement based on inter-frame motion compensation for photoacoustic imaging: a preliminary study,” Proceedings of 2013 IEEE International Ultrasonics Symposium (IUS), Prague, Czech Republic, 21-25 July, pp. 1527-1531 (2013).

Yuan, Y.

Z. Sun, Y. Yuan, and D. Han, “A computer-based simulator for intravascular photoacoustic images,” Comput. Biol. Med. 81, 176–187 (2017).
[Crossref]

Z. Sun, D. Han, and Y. Yuan, “2-D image reconstruction of photoacoustic endoscopic imaging based on time-reversal,” Comput. Biol. Med. 76, 60–68 (2016).
[Crossref]

Zafar, M.

R. Manwar, M. Zafar, and Q. Xu, “Signal and image processing in biomedical photoacoustic imaging: a review,” Optics 2(1), 1–24 (2021).
[Crossref]

Zhang, J.

J. Mavadia-Shukla, J. Zhang, K. Li, and X. Li, “Stick-slip non-uniform rotation distortion correction in distal scanning optical coherence tomography catheters,” J. Innov. Opt. Heal. Sci. 13(06), 2050030 (2020).
[Crossref]

H. Zhao, N. Chen, T. Li, J. Zhang, R. Lin, X. Gong, L. Song, Z. Liu, and C. Liu, “Motion correction in optical resolution photoacoustic microscopy,” IEEE Trans. Med. Imaging 38(9), 2139–2150 (2019).
[Crossref]

Zhang, Q.

R. Bajaj, X. Huang, Y. Kilic, A. Jain, A. Ramasamy, R. Torii, J. Moon, T. Koh, T. Crake, M.K. Parker, V. Tufaro, P.W. Serruys, F. Pugliese, A. Mathur, A. Baumbach, J. Dijkstra, Q. Zhang, and C.V. Bourantas, “A deep learning methodology for the automated detection of end-diastolic frames in intravascular ultrasound images,” Int J Cardiovasc Imaging 37(6), 1825–1837 (2021).
[Crossref]

Zhao, H.

H. Zhao, N. Chen, T. Li, J. Zhang, R. Lin, X. Gong, L. Song, Z. Liu, and C. Liu, “Motion correction in optical resolution photoacoustic microscopy,” IEEE Trans. Med. Imaging 38(9), 2139–2150 (2019).
[Crossref]

Zheng, W.

R. Lin, J. Chen, H. Wang, M. Yan, W. Zheng, and L. Song, “Longitudinal label-free optical-resolution photoacoustic microscopy of tumor angiogenesis in vivo,” Quant. Imag. Med. Surg. 5(1), 23–29 (2015).
[Crossref]

Zhou, S.

S. Zhou and Z. Xu, “Automatic grayscale image segmentation based on affinity propagation clustering,” Pattern Anal Applic 23(1), 331–348 (2020).
[Crossref]

Appl. Sci. (2)

A. Ron, N. Davoudi, X.L. Deán-Ben, and D. Razansky, “Self-gated respiratory motion rejection for optoacoustic tomography,” Appl. Sci. 9(13), 2737 (2019).

Y. Liu, M. Sun, T. Liu, Y. Ma, D. Hu, C. Li, and N. Feng, “Quantitative reconstruction of absorption coefficients for photoacoustic tomography,” Appl. Sci. 9(6), 1187 (2019).

Chem. Soc. Rev. (1)

X. Deán-Ben, S. Gottschalk, B. Mc Larney, S. Shoham, and D. Razansky, “Advanced optoacoustic methods for multiscale imaging of in vivo dynamics,” Chem. Soc. Rev. 46(8), 2158–2198 (2017).

Chinese J. Biomed. Eng. (1)

Y. Xu, H. Xue, and G. Hu, “Diameter measurements of coronary artery segments based on image analysis of X-ray angiograms,” Chinese J. Biomed. Eng. 26(6), 874–878 (2007).

Comput. Biol. Med. (3)

Z. Sun, D. Han, and Y. Yuan, “2-D image reconstruction of photoacoustic endoscopic imaging based on time-reversal,” Comput. Biol. Med. 76, 60–68 (2016).

Z. Sun, Y. Yuan, and D. Han, “A computer-based simulator for intravascular photoacoustic images,” Comput. Biol. Med. 81, 176–187 (2017).

Z. Sun and Q. Yan, “An off-line gating method for suppressing motion artifacts in ICUS sequence,” Comput. Biol. Med. 40(11-12), 860–868 (2010).

Entropy (1)

C.X. Sun, Y. Yang, H. Wang, and W. Wang, “A clustering approach for motif discovery in ChIP-Seq dataset,” Entropy 21(8), 802 (2019).

IEEE Trans. Biomed. Eng. (1)

J. Peng, L. Ma, X. Li, H. Tang, Y. Li, and S. Chen, “A novel synchronous micro motor for intravascular ultrasound imaging,” IEEE Trans. Biomed. Eng. 66(3), 802–809 (2019).

IEEE Trans. Inform. Technol. Biomed. (1)

S.M. O’Malley, J.F. Granada, S. Carlier, M. Naghavi, and I.A. Kakadiaris, “Image-based gating of intravascular ultrasound pullback sequences,” IEEE Trans. Inform. Technol. Biomed. 12(3), 299–306 (2008).

IEEE Trans. Med. Imaging (3)

N. Torbati, A. Ayatollahi, and P. Sadeghipour, “Image-based gating of intravascular ultrasound sequences using the phase information of dual-tree complex wavelet transform coefficients,” IEEE Trans. Med. Imaging 38(12), 2785–2795 (2019).

H. Zhao, N. Chen, T. Li, J. Zhang, R. Lin, X. Gong, L. Song, Z. Liu, and C. Liu, “Motion correction in optical resolution photoacoustic microscopy,” IEEE Trans. Med. Imaging 38(9), 2139–2150 (2019).

M. Xu and L.V. Wang, “Time-domain reconstruction for thermoacoustic tomography in a spherical geometry,” IEEE Trans. Med. Imaging 21(7), 814–822 (2002).

IEEE Trans. Ultrason., Ferroelect., Freq. Contr. (2)

D. Vanderlaan, A.B. Karpiouk, D. Yeager, and S. Emelianov, “Real-time intravascular ultrasound and photoacoustic imaging,” IEEE Trans. Ultrason., Ferroelect., Freq. Contr. 64(1), 141–149 (2017).

A. Hernàndez-Sabaté, D. Gil, J. Garcia-Barnés, and E. Martí, “Image-based cardiac phase retrieval in intravascular ultrasound sequences,” IEEE Trans. Ultrason., Ferroelect., Freq. Contr. 58(1), 60–72 (2011).

Int. J. Cardiovasc. Imaging (1)

R. Bajaj, X. Huang, Y. Kilic, A. Jain, A. Ramasamy, R. Torii, J. Moon, T. Koh, T. Crake, M.K. Parker, V. Tufaro, P.W. Serruys, F. Pugliese, A. Mathur, A. Baumbach, J. Dijkstra, Q. Zhang, and C.V. Bourantas, “A deep learning methodology for the automated detection of end-diastolic frames in intravascular ultrasound images,” Int. J. Cardiovasc. Imaging 37(6), 1825–1837 (2021).

J. Appl. Phys. (1)

W. Choi, D. Oh, and C. Kim, “Practical photoacoustic tomography: realistic limitations and technical solutions,” J. Appl. Phys. 127(23), 230903 (2020).

J. Biomed. Opt. (6)

S.S.S. Choi and A. Mandelis, “Review of the state of the art in cardiovascular endoscopy imaging of atherosclerosis using photoacoustic techniques with pulsed and continuous-wave optical excitations,” J. Biomed. Opt. 24(8), 080902 (2019).

H.P. Brecht, R. Su, M. Fronheiser, S.A. Ermilov, A. Conjusteau, and A.A. Oraevsky, “Whole-body three-dimensional optoacoustic tomography system for small animals,” J. Biomed. Opt. 14(6), 064007 (2009).

J. Xia, W.Y. Chen, K. Maslov, M.A. Anastasio, and L.V. Wang, “Retrospective respiration-gated whole-body photoacoustic computed tomography of mice,” J. Biomed. Opt. 19(1), 16003 (2014).

E. Abouei, A.M.D. Lee, H. Pahlevaninezhad, G. Hohert, M. Cua, P. Lane, S. Lam, and C. MacAulay, “Correction of motion artifacts in endoscopic optical coherence tomography and autofluorescence images based on azimuthal en face image registration,” J. Biomed. Opt. 23(01), 1 (2018).

A. Taruttis, J. Claussen, D. Razansky, and V. Ntziachristos, “Motion clustering for deblurring multispectral optoacoustic tomography images of the mouse heart,” J. Biomed. Opt. 17(1), 016009 (2012).

H. Deng, H. Qiao, Q. Dai, and C. Ma, “Deep learning in photoacoustic imaging: a review,” J. Biomed. Opt. 26(04), 040901 (2021).

J. Innov. Opt. Health Sci. (2)

J. Mavadia-Shukla, J. Zhang, K. Li, and X. Li, “Stick-slip non-uniform rotation distortion correction in distal scanning optical coherence tomography catheters,” J. Innov. Opt. Health Sci. 13(06), 2050030 (2020).

Y. Li, J. Chen, and Z. Chen, “Multimodal intravascular imaging technology for characterization of atherosclerosis,” J. Innov. Opt. Health Sci. 13(01), 2030001 (2020).

J. Med. Imag. Health In. (1)

Z. Sun and M. Li, “Suppression of cardiac motion artifacts in sequential intracoronary optical coherence images,” J. Med. Imag. Health In. 6(7), 1787–1793 (2016).

J. Nucl. Med. (1)

Y. Lu, K. Fontaine, T. Mulnix, J.A. Onofrey, S. Ren, V. Panin, J. Jones, M.E. Casey, R. Barnett, P. Kench, and R. Fulton, “Respiratory motion compensation for PET/CT with motion information derived from matched attenuation-corrected gated PET data,” J. Nucl. Med. 59(9), 1480–1486 (2018).

Magn. Reson. Med. (1)

M.H. Moghari, A. Barthur, M.E. Amaral, T. Geva, and A.J. Powell, “Free-breathing whole-heart 3D cine magnetic resonance imaging with prospective respiratory motion compensation,” Magn. Reson. Med. 80(1), 181–189 (2018).

Optics (1)

R. Manwar, M. Zafar, and Q. Xu, “Signal and image processing in biomedical photoacoustic imaging: a review,” Optics 2(1), 1–24 (2021).

Pattern Anal. Applic. (1)

S. Zhou and Z. Xu, “Automatic grayscale image segmentation based on affinity propagation clustering,” Pattern Anal. Applic. 23(1), 331–348 (2020).

Photoacoustics (1)

J. Grohl, M. Schellenberg, K. Dreher, and L. Maier-Hein, “Deep learning for biomedical photoacoustic imaging: A review,” Photoacoustics 22, 100241 (2021).

Phys. Med. Biol. (2)

S.L. Jacques, “Optical properties of biological tissues: a review,” Phys. Med. Biol. 58(11), R37–R61 (2013).

J. Poudel, L. Yang, and M.A. Anastasio, “A survey of computational frameworks for solving the acoustic inverse problem in three-dimensional photoacoustic computed tomography,” Phys. Med. Biol. 64(14), 14TR01 (2019).

PLoS One (1)

F. Griese, S. Latus, M. Schluter, M. Graeser, M. Lutz, A. Schlaefer, and T. Knopp, “In-vitro MPI-guided IVOCT catheter tracking in real time for motion artifact compensation,” PLoS One 15(3), e0230821 (2020).

Quant. Imag. Med. Surg. (1)

R. Lin, J. Chen, H. Wang, M. Yan, W. Zheng, and L. Song, “Longitudinal label-free optical-resolution photoacoustic microscopy of tumor angiogenesis in vivo,” Quant. Imag. Med. Surg. 5(1), 23–29 (2015).

SIAM J. Imaging Sci. (1)

J. Chung and L. Nguyen, “Motion estimation and correction in photoacoustic tomographic reconstruction,” SIAM J. Imaging Sci. 10(1), 216–242 (2017).

Ultrasound Med. Biol. (1)

S.K. Nadkarni, D.R. Boughner, and A. Fenster, “Image-based cardiac gating for three-dimensional intravascular ultrasound,” Ultrasound Med. Biol. 31(1), 53–63 (2005).

Vis. Comput. Ind. Biomed. Art (1)

X. Chen, W. Qi, and L. Xi, “Deep-learning-based motion-correction algorithm in optical resolution photoacoustic microscopy,” Vis. Comput. Ind. Biomed. Art 2(1), 12 (2019).

Other (3)

R.B. Lam, R.A. Kruger, D.R. Reinecke, S.P. Delrio, M.M. Thornton, P. Picot, and T.G. Morgan, “Dynamic optical angiography of mouse anatomy using radial projections,” Proceedings of SPIE International Conference on Photons Plus Ultrasound: Imaging and Sensing, 23 Feb., vol. 7564, pp. 756405 (2010).

M. Kim, J. Kang, J.H. Chang, T.K. Song, and Y. Yoo, “Image quality improvement based on inter-frame motion compensation for photoacoustic imaging: a preliminary study,” Proceedings of 2013 IEEE International Ultrasonics Symposium (IUS), Prague, Czech Republic, 21-25 July, pp. 1527-1531 (2013).

T. Chen, T. Lu, S. Song, S. Miao, F. Gao, and J. Li, “A deep learning method based on U-Net for quantitative photoacoustic imaging,” Proceedings of SPIE International Conference on Photons Plus Ultrasound: Imaging and Sensing, vol. 11240, pp. 112403V (2020).

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.



Figures (11)

Fig. 1. Schematic diagram of IVPA imaging. (a) Longitudinal view (L-view) of a vessel segment to be imaged; (b) Transversal view of an imaging plane; (c) Stacked transversal images in temporal order; (d) Generation of cardiac cycle-dependent motion artifacts in a time-axis view of pullback volumes.
Fig. 2. Flowchart of the complete process of our method.
Fig. 3. Geometry of vascular cross-sections, numbered I, II, III, and IV from left to right.
Fig. 4. Schematic diagram of L-views of IVPA pullback volumes.
Fig. 5. Simulated transversal images of the four vessel cross-sections shown in Fig. 3. (a) Images representing the normalized optical energy deposition; (b) Images representing the normalized PA signals reaching the detector.
Fig. 6. Images randomly selected from the pullback sequences of transversal images representing the initial pressure distribution, reconstructed directly from the simulated PA signals by TR without motion suppression. (a) Phantom I; (b) Phantom II; (c) Phantom III; (d) Phantom IV.
Fig. 7. Gating results of the proposed method for four IVPA sequences. (a) Vertical L-views; (b) Horizontal L-views. To facilitate comparison, the gated sequences are artificially stretched to the same physical lengths as the non-gated ones, because the non-gated sequences contain far more images per millimeter than the gated sequences.
Fig. 8. Visualization of DMs obtained from the non-gated (left column) and gated (right column) image sequences. (a) Phantom I; (b) Phantom II; (c) Phantom III; (d) Phantom IV.
Fig. 9. AD functions with respect to frame interval obtained from the non-gated and gated image sequences for (a) phantom I, (b) II, (c) III, and (d) IV. There are 60, 65, 80, and 72 frames in the four gated sequences, respectively.
Fig. 10. L-views of the IVPA sequence of phantom I processed with our method and with image-based gating. (a) Non-gated sequence; (b) Gated sequence with our method; (c) Gated sequence with the image-based method.
Fig. 11. DMs and ADs of the reconstructed image sequences for phantom I. There are 60 and 55 frames in the gated sequences obtained by our method and by image-based gating, respectively. (a) Visualization of the DMs obtained from the non-gated sequence (left), the gated sequence by our method (middle), and the gated sequence by image-based gating (right); (b) AD with respect to frame interval.

Tables (4)

Table 1. Parameters of optical and acoustic properties of vessel phantoms for forward IVPA simulation.
Table 2. AIFDs of the image sequences before and after motion correction.
Table 3. AIFDs of the image sequences for phantom I in the case of different thresholds of the objective function.
Table 4. Quantitative metrics of the image sequences of phantom I processed by two methods.

Equations (14)


$$\mathbf{P}_k=\begin{bmatrix} p_{11} & p_{12} & \cdots & p_{1N}\\ p_{21} & p_{22} & \cdots & p_{2N}\\ \vdots & \vdots & \ddots & \vdots\\ p_{M1} & p_{M2} & \cdots & p_{MN} \end{bmatrix},$$

$$\mathbf{C}=\begin{bmatrix} \rho_{11} & \rho_{12} & \cdots & \rho_{1W}\\ \rho_{21} & \rho_{22} & \cdots & \rho_{2W}\\ \vdots & \vdots & \ddots & \vdots\\ \rho_{W1} & \rho_{W2} & \cdots & \rho_{WW} \end{bmatrix},$$

$$\rho_{ij}=\frac{\left|\operatorname{Cov}(\mathbf{P}_i,\mathbf{P}_j)\right|}{\sqrt{D(\mathbf{P}_i)}\,\sqrt{D(\mathbf{P}_j)}}.$$

$$s_t(i,k)=-\left| f_i-f_k \right|^2,$$

$$r_{t+1}(i,k)=\begin{cases} s_t(i,k)-\max\limits_{k'\ \mathrm{s.t.}\ k'\neq k}\left\{ a_t(i,k')+s_t(i,k') \right\}, & i\neq k\\[4pt] s_t(i,k)-\max\limits_{k'\ \mathrm{s.t.}\ k'\neq k}\left\{ s_t(i,k') \right\}, & i=k \end{cases}$$

$$a_{t+1}(i,k)=\begin{cases} \min\left\{ 0,\ r_{t+1}(k,k)+\sum\limits_{i'\ \mathrm{s.t.}\ i'\notin\{i,k\}}\max\left( 0,\ r_{t+1}(i',k) \right) \right\}, & i\neq k\\[4pt] \sum\limits_{i'\ \mathrm{s.t.}\ i'\neq k}\max\left\{ 0,\ r_{t+1}(i',k) \right\}, & i=k, \end{cases}$$

$$\begin{cases} \hat{r}_{t+1}(i,k)=\lambda\, r_t(i,k)+(1-\lambda)\, r_{t+1}(i,k)\\ \hat{a}_{t+1}(i,k)=\lambda\, a_t(i,k)+(1-\lambda)\, a_{t+1}(i,k), \end{cases}$$

$$e=\hat{a}_{t+1}(i,k)+\hat{r}_{t+1}(i,k).$$
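As a concrete illustration, the affinity-propagation message passing used for frame clustering can be sketched in NumPy. This is a minimal sketch of the standard Frey–Dueck formulation (responsibility and availability updates with damping factor λ and exemplar decision e = a + r); the vectorized form, the median-similarity preference, and the function name are illustrative choices, not taken from the paper.

```python
import numpy as np

def affinity_propagation(X, lam=0.5, n_iter=200):
    """Cluster the rows of X by affinity propagation; return each row's exemplar index."""
    n = X.shape[0]
    # Similarity s(i,k) = -|f_i - f_k|^2 (negative squared Euclidean distance)
    S = -np.square(X[:, None, :] - X[None, :, :]).sum(-1)
    # Preference (self-similarity) set to the median similarity -- an illustrative choice
    S[np.arange(n), np.arange(n)] = np.median(S)
    R = np.zeros((n, n))  # responsibilities
    A = np.zeros((n, n))  # availabilities
    for _ in range(n_iter):
        # Responsibility: r(i,k) = s(i,k) - max_{k' != k} {a(i,k') + s(i,k')}
        AS = A + S
        idx = np.argmax(AS, axis=1)
        first_max = AS[np.arange(n), idx]
        AS[np.arange(n), idx] = -np.inf
        second_max = AS.max(axis=1)
        R_new = S - first_max[:, None]
        R_new[np.arange(n), idx] = S[np.arange(n), idx] - second_max
        R = lam * R + (1 - lam) * R_new          # damping with factor lam
        # Availability: a(i,k) = min{0, r(k,k) + sum_{i' not in {i,k}} max(0, r(i',k))}
        Rp = np.maximum(R, 0)
        Rp[np.arange(n), np.arange(n)] = R[np.arange(n), np.arange(n)]
        A_new = Rp.sum(axis=0)[None, :] - Rp
        dA = np.diag(A_new).copy()               # a(k,k) = sum_{i' != k} max(0, r(i',k))
        A_new = np.minimum(A_new, 0)
        A_new[np.arange(n), np.arange(n)] = dA
        A = lam * A + (1 - lam) * A_new          # damping with factor lam
    # Exemplar decision: e = a(i,k) + r(i,k); each point picks argmax over k
    return np.argmax(A + R, axis=1)
```

Running this on two well-separated groups of points assigns each group a single shared exemplar, which is how frames with similar cardiac phases would collapse into one cluster.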
$$J=\sum_{i=1}^{H}\left| f_{h_i}-f_{g_i} \right|^2,$$

$$S(n)=\begin{cases} A\sin(\pi R n)\exp(-\pi\alpha n)+S_0, & 0<n<1/R\\ S(n-1/R), & n\geq 1/R, \end{cases}$$

$$\begin{cases} l_{i,n+1}=l_{i,n}\,\lambda\, S(n+1)/S(n)\\ \theta_{i,n+1}=\theta_{i,n}, \end{cases}$$
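For reference, the periodic damped-sine motion signal S(n) defined above can be generated in a few lines; the function name and the default parameter values (amplitude A, rate R, damping α, offset S0) are illustrative assumptions, not values from the paper.

```python
import numpy as np

def cardiac_motion_signal(t, A=1.0, R=1.2, alpha=0.5, S0=0.0):
    """Periodic cardiac motion S(n): a damped half-sine over one cycle of
    length 1/R, repeated by the recursion S(n) = S(n - 1/R) for n >= 1/R."""
    n = np.mod(np.asarray(t, dtype=float), 1.0 / R)  # fold time into one cycle
    return A * np.sin(np.pi * R * n) * np.exp(-np.pi * alpha * n) + S0
```

The `np.mod` fold implements the recursive branch of the piecewise definition, so the same expression serves both cases.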
$$d_{i,j}=1-\frac{\sum_{k=1}^{Width}\sum_{l=1}^{Height}\left| I_i(k,l)-\mu_i \right|\left| I_j(k,l)-\mu_j \right|}{\sqrt{\sum_{k=1}^{Width}\sum_{l=1}^{Height}\left[ I_i(k,l)-\mu_i \right]^2}\,\sqrt{\sum_{k=1}^{Width}\sum_{l=1}^{Height}\left[ I_j(k,l)-\mu_j \right]^2}},$$

$$D(k)=\frac{1}{W-k}\sum_{m=1}^{W-k} d_{m,m+k},$$

$$\overline{D}=\frac{1}{W^2}\sum_{i=1}^{W}\sum_{j=1}^{W} d_{i,j},$$
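The dissimilarity measures above translate directly into NumPy: the per-pair dissimilarity d_{i,j}, the W×W dissimilarity matrix (DM), and the average dissimilarity D(k) over frames k apart. This is a minimal sketch; the function names are illustrative.

```python
import numpy as np

def frame_dissimilarity(Ii, Ij):
    """d_ij: one minus the absolute-valued normalized correlation of two frames."""
    a = Ii - Ii.mean()  # I_i(k,l) - mu_i
    b = Ij - Ij.mean()  # I_j(k,l) - mu_j
    num = (np.abs(a) * np.abs(b)).sum()
    den = np.sqrt((a ** 2).sum()) * np.sqrt((b ** 2).sum())
    return 1.0 - num / den

def dissimilarity_matrix(frames):
    """DM: pairwise dissimilarities over a sequence of W frames."""
    W = len(frames)
    d = np.empty((W, W))
    for i in range(W):
        for j in range(W):
            d[i, j] = frame_dissimilarity(frames[i], frames[j])
    return d

def avg_dissimilarity(d, k):
    """D(k): mean dissimilarity between frame pairs separated by interval k."""
    W = d.shape[0]
    return d[np.arange(W - k), np.arange(W - k) + k].mean()
```

By the Cauchy–Schwarz inequality the normalized correlation lies in [0, 1], so each d_{i,j} (and hence D(k)) is also bounded in [0, 1], with d_{i,i} = 0 for identical frames.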
