
Breaking the limitations with sparse inputs by variational frameworks (BLIss) in terahertz super-resolution 3D reconstruction

Open Access

Abstract

Data acquisition, image processing, and image quality have been long-standing issues for terahertz (THz) 3D reconstructed imaging. Existing methods are primarily designed for 2D scenarios, given the challenges associated with obtaining super-resolution (SR) data and the absence of an efficient SR 3D reconstruction framework in conventional computed tomography (CT). Here, we demonstrate BLIss, a new approach for THz SR 3D reconstruction with sparse 2D data input. BLIss seamlessly integrates conventional CT techniques with a variational framework built around an adapted Euler-Elastica-based model. Quantitative 3D image evaluation metrics, including the standard deviations of Gaussian and mean curvatures and the multi-scale structural similarity index measure (MS-SSIM), validate the superior smoothness and fidelity achieved with our variational framework compared with the conventional THz CT method. Beyond its contributions to advancing THz SR 3D reconstruction, BLIss demonstrates potential applicability in other imaging modalities, such as X-ray and MRI, suggesting extensive impacts on the broader field of imaging applications.

© 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Having gained increasing attention in recent years, terahertz (THz) imaging has proven its worth as a versatile tool for various applications such as non-destructive inspection [1-3], security screening [4-7], and non-contact testing [8-11]. The unique properties of THz radiation, including its low photon energy, make it suitable for safe use in bio-related scenarios [12,13]. Additionally, its capability to penetrate optically opaque materials allows the extraction of internal geometric and material information from tested objects [14-18]. THz imaging encompasses a broad spectrum of modalities, with pulsed THz imaging emerging as one of the most prevalent and widely employed techniques in this field. It is often associated with a THz time-domain spectroscopy (THz-TDS) system [13,19-23], enabling the acquisition of ultrafast responses within the domain of light-matter interactions across temporal, spatial, and spectral dimensions. Recently, it has paved the way for a range of applications, including analysis of conformational behaviors [24], ultrafast molecular dynamics [25], dielectric responses [26], and electrical properties [27].

Leveraging these advantages, pulsed THz imaging finds practical use in visualizing the internal structure of 3D objects. Among the frequently employed pulsed THz imaging techniques, computed tomography (CT) represents an effective computational method. This technique provides cross-sectional images (slices) reconstructed from a sequence of projected images taken at various projection angles by inverse Radon transform $\hat {\mathcal {R}}$ (IRT) [9,28]. While X-ray CT has found extensive use in various real-world applications, its ionizing characteristics and high penetration capacity inherently restrict its utility, particularly in the visualization of composite materials, soft dielectrics, or liquids. Furthermore, the limited differentiation of material responses within the X-ray spectrum poses challenges in identifying the chemical components present within objects [9,29]. In light of these limitations, the introduction of pulsed THz CT imaging, first proposed in 2002 [30], emerged as a promising supplement to traditional X-ray CT.

While pulsed THz CT imaging exhibits considerable potential as a supplement with non-ionizing characteristics and sensitive material differentiation, two primary concerns are (i) the protracted data acquisition process with limited scanning resolution in THz-TDS; and (ii) Fraunhofer diffraction during THz wave propagation through the object [30,31]. For clarification, the term resolution here typically refers to the number of pixels in digital imaging rather than the diffraction-limited resolution, because all datasets are collected by THz-TDS and processed at the pixel-based scanning resolution. Concerning (i), THz-TDS has historically relied on mechanical time-delay scanning. However, non-mechanical time-domain sampling methods, such as asynchronous optical sampling (ASOPS) introduced in 2005 [32] and electronically controlled optical sampling (ECOPS) introduced in 2010 [33], offer a promising alternative. These techniques can significantly reduce the data acquisition time for each single-cycle THz pulse, shifting from hour/minute scales to milliseconds. Despite these advancements, the implementation of ASOPS and ECOPS requires costly or bulky femtosecond lasers. Moreover, projection data acquisition for THz 3D CT imaging still involves mechanical operations, such as raster scanning and object rotation, which constrain the resolution of the projection images to the scanning resolution and thereby impact subsequent 3D reconstruction. Recently, a high-throughput alternative is the use of terahertz focal-plane arrays (THz-FPAs) [34]. These detector arrays, designed based on field-effect transistors [35], microbolometers [36], and plasmonic nanoantennas [37,38], offer the potential to bypass mechanical operations. Nevertheless, the shift towards THz-FPAs introduces its own set of challenges. These arrays require scalable high-speed electronics, considerably more computational power, and increased system complexity, all of which scale with the number of THz detectors in use; notably, this transition can be cost-intensive. Moreover, while THz-FPAs can speed up data acquisition, the quality of the reconstructed 3D images is still affected by the uniformity of each device and, most importantly, by concern (ii): diffraction effects accompanied by noise.

Addressing these issues without costly upgrades to the imaging system, algorithm-based methods show promise in directly enhancing low-resolution (LR) images into super-resolution (SR) images, tackling concerns (i) and (ii). For instance, using prior knowledge of the degradation based on the specific point spread function (PSF) measured [39-41] or simulated [42-44] from the THz imaging system, a variety of SR algorithms have been explored, including the Richardson-Lucy algorithm [45] and PSF deconvolution [46]. These methods have demonstrated success in improving spatial resolution and mitigating the diffraction limit, although challenges in rendering image edge and texture details remain. More recently, deep-learning-based SR methods have been increasingly recognized as a prominent approach. By establishing an end-to-end mapping between LR and high-resolution (HR) images, they have shown advantages in comprehensive learning ability and the overall effectiveness of image reconstruction after training. Examples include the convolutional neural network (CNN) and its variants, the very deep super-resolution (VDSR) method, the super-resolution network for multiple degradations (SRMD), and generative adversarial networks (GAN) [14,16,17,47-54]. However, the application of deep-learning-based methods to THz single-image super-resolution (SISR) faces several challenges, including the complex THz image degradation process, limited efficiency in deblurring and denoising procedures, and the issue of gradient vanishing during deep network training. In a nutshell, both algorithm-based and deep-learning-based methods demonstrate their potential in improving the spatial resolution of THz imaging, but their primary focus on 2D image processing often overlooks the global characteristics of the 3D post-reconstruction. Moreover, the need for sufficient datasets and costly GPU workstations for model training presents another challenge.

In this work, we focus on improving 3D THz reconstruction by integrating variational frameworks with conventional CT methods. We introduce BLIss (as delineated in Fig. 1) with the aim of facilitating rapid and smooth SR 3D reconstruction, even when working with a limited pool of LR images. In detail, this framework integrates traditional CT techniques, including signal processing (e.g., projections, sinograms, and slices by IRT $\hat {\mathcal {R}}$), with the underpinning mathematical principles of variational formulations, i.e., the Willmore-based formulation and the Euler-Elastica-based formulation. To assess the quality and fidelity of the results reconstructed by BLIss, conventional evaluation metrics such as mean-squared error (MSE), peak signal-to-noise ratio (PSNR), and the structural similarity index (SSIM), which are commonly employed in 2D image assessment, are not sufficient because the reconstructed data are 3D surfaces rather than 2D images. To address this, we introduce an approach that quantifies the global smoothness of the reconstructed 3D surfaces using concepts from discrete geometry. This approach calculates the standard deviations of the Gaussian and mean curvatures of the triangular meshes of the reconstructed surfaces. In addition, we evaluate the fidelity of reconstruction when using limited input data. To achieve this, we extend our analysis to incorporate the multi-scale structural similarity index measure (MS-SSIM), a 3D structural similarity metric that builds upon the foundation of the 2D SSIM. Our proposed variational framework effectively addresses concerns (i) and (ii), presenting a pathway to rapid and high-precision 3D THz imaging. This advancement holds significant promise for non-destructive sensing and inspection applications.

Fig. 1. Schematic illustration (top) and visualization (bottom) of BLIss in our THz SR 3D reconstruction: conventional methodologies are integrated with proposed variational models to improve quality and resolution (highlighted nodes within the inner color).

2. Methods

2.1 Experimental setup for data acquisition

The experimental setup is shown in Fig. 2(a). The THz 3D imaging system records time-resolved THz signals for each voxel using an ASOPS THz-TDS system (TERA ASOPS, MenloSystems). The pulsed THz wave is generated by a THz photoconductive antenna (PCA) emitter and directed through four plano-convex polypropylene (PP) THz lenses, which focus it onto the sample with a beam size of 1.5 mm before refocusing it onto a THz PCA detector. The photocurrent signal is subsequently amplified by a transimpedance amplifier (TIA) and digitized by an analog-to-digital converter (ADC) processing unit. The data collection process is accelerated by the use of two asynchronous femtosecond (fs) Er-doped fiber lasers in the ASOPS system. The repetition rate of the pump laser is 100 MHz + 50 Hz ($f_{\mathrm {pump}}$), the probe laser operates at 100 MHz ($f_{\mathrm {probe}}$), and the fixed offset frequency ($\Delta f = f_{\mathrm {pump}} - f_{\mathrm {probe}}$) is 50 Hz. The sampling rate ($n_{\mathrm {sr}}$) is 10 MHz, which allows for a data acquisition time of approximately 20 ms per pulse with a temporal resolution of 50 fs. Each THz signal pulse in the dataset corresponds to 5000 sampling points per voxel and spans 250 ps, as instantiated in Fig. 2(b). This allows for a dynamic range of approximately 38.26 dB from 0.1 to 4 THz, as shown in Fig. 2(c). To facilitate the acquisition of a multi-projection temporal THz dataset for further THz CT purposes, the objects under test are positioned on a rotational stage $\mathcal {R}$ equipped with motorized stages in two directions (length $\mathcal {L}$ and height $\mathcal {H}$).
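As a sanity check on these numbers, the per-pulse acquisition time and the effective temporal resolution follow directly from the stated repetition rates, offset frequency, and ADC sampling rate. The short sketch below (ours, not the acquisition software) reproduces them; all values are taken from the text.

```python
# Hedged sketch: derive the ASOPS timing figures quoted above from the stated
# laser and ADC parameters. Variable names are illustrative only.
f_pump  = 100e6 + 50           # pump repetition rate (Hz)
f_probe = 100e6                # probe repetition rate (Hz)
delta_f = f_pump - f_probe     # offset frequency: 50 Hz
n_sr    = 10e6                 # ADC sampling rate (Hz)

sweep_time = 1.0 / delta_f                  # one full delay sweep: 20 ms per pulse
delay_step = (delta_f / f_probe) / n_sr     # effective temporal resolution: 50 fs
n_points   = 5000                           # retained samples per voxel
time_span  = n_points * delay_step          # 250 ps window, matching Fig. 2(b)
print(f"{sweep_time*1e3:.0f} ms/sweep, {delay_step*1e15:.0f} fs/sample, "
      f"{time_span*1e12:.0f} ps span")
```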

Fig. 2. (a) Schematic diagram of the THz 3D imaging system setup using transmission geometry (TIA: transimpedance amplifier). (b) Illustrations of one THz pulse without averaging in the time domain. (c) Illustrations of normalized THz power spectrum obtained through fast Fourier transform (FFT) operation from (b).

2.2 Conventional THz CT imaging

Conventional techniques for THz 3D CT imaging usually involve a series of steps to construct 3D images from the collected THz dataset. The initial step is the acquisition of projection data. In this step, the Time-MAX projection technique is one of the most fundamental methods in THz 3D CT imaging. Time-MAX extracts the maximum field strength from each time-resolved THz signal trace, which is then used to represent the projection of the scanned object at a particular projection angle [14,15]. This method effectively reduces the temporal dimensionality of the THz signals to a 2D projection image (see Fig. 3(a)). The data obtained from Time-MAX are primarily associated with object thickness, surface contour, and material contrast, providing insight into the spatial distribution of composite materials within the objects. Next, a sinogram (see Fig. 3(b)) is generated by stacking the 2D projection images acquired at different rotation angles into a 3D array. The sinogram visualizes the internal structure of the object from multiple angles and forms the basis for image reconstruction. The horizontal axis represents the spatial location along the projection, and the vertical axis corresponds to the projection angle. The value at each pixel in the sinogram indicates the THz signal intensity at the corresponding location and angle. The final step in the conventional THz CT imaging process is the inverse Radon transform (IRT) $\hat {\mathcal {R}}$ [15,55]. This mathematical procedure reconstructs an image $f(\mathbf {x})$ from its projections

$$f(\mathbf{x}) \approx \sum_{k} \left( \mathcal{R}f(\cdot, \theta_{k}) * h \right) \left( \langle \mathbf{x}, \mathbf{n}_{\theta_{k}} \rangle \right)$$
where the Radon transform $\mathcal {R}f(t, \theta )$ is the integral of $f$ along the line $L_{t, \theta } = \left \{ \mathbf {x}: \langle \mathbf {x}, \mathbf {n}_{\theta } \rangle = t \right \}$, whose distance from the origin is $t$ and whose direction is orthogonal to the unit vector $\mathbf {n}_{\theta } = (\cos {\theta }, \sin {\theta })$, and the convolution kernel $h$ satisfies $\hat {h}(\omega ) = |\omega |$. Applied to the THz field, the IRT takes the sinogram as input and reconstructs a 2D cross-section (slice) of the object (see Fig. 3(c)). The value of each reconstructed pixel represents the normalized attenuated intensity, rescaled to the interval $[0, 1]$ based on the minimum and maximum across all pixels. A 3D volume image of the object can then be obtained (see Fig. 4(a)) by stacking all 2D slices along the vertical direction and selecting an appropriate threshold. Here, the threshold can be determined from characteristics of the histogram, such as peaks and valleys. These stages are fundamental to conventional THz 3D CT imaging. While they can yield decent reconstructed THz 3D images, they are still limited by computational cost and complexity, the need for extensive data processing, and potential artifacts in the reconstructed image. Nevertheless, they continue to serve as the cornerstone methods in THz 3D CT imaging, thanks to their firmly established theoretical foundation and widespread implementation.
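For concreteness, the sketch below illustrates this conventional pipeline (Time-MAX projection, sinogram assembly, filtered back projection) using scikit-image's `iradon` as the IRT. It is a minimal illustration under our own naming conventions, not the code used to produce the results in this paper.

```python
# Minimal sketch of the conventional THz CT pipeline described above.
import numpy as np
from skimage.transform import iradon

def time_max_projection(signals):
    """Collapse the temporal axis of time-resolved traces (H, L, T)
    by taking the peak field strength per pixel (Time-MAX)."""
    return np.max(np.abs(signals), axis=-1)              # shape (H, L)

def reconstruct_slices(projections, angles_deg):
    """projections: (n_angles, H, L) Time-MAX images; returns (H, L, L) slices."""
    n_angles, H, L = projections.shape
    slices = np.empty((H, L, L))
    for h in range(H):
        sinogram = projections[:, h, :].T                 # (L, n_angles)
        rec = iradon(sinogram, theta=angles_deg,
                     filter_name="ramp", output_size=L)   # ramp filter: h_hat = |w|
        rec = (rec - rec.min()) / (rec.max() - rec.min() + 1e-12)  # rescale to [0, 1]
        slices[h] = rec
    return slices

# Example with the scan geometry reported later (60 projections, 6-degree steps):
# angles = np.arange(0, 360, 6); slices = reconstruct_slices(projs, angles)
```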

Fig. 3. Illustrations of the conventional techniques for THz 3D CT Imaging: (a) projection by Time-MAX; (b) sinogram; (c) cross section (slice) by IRT.

2.3 Variational frameworks and numerical algorithm with ADMM

In this section, we present a variational approach based on the adapted Euler-Elastica-based formulation and implement it through a numerical algorithm using the alternating direction method of multipliers (ADMM). The variational approach, often referred to as the mathematical approach, and the physical modeling approach are two distinct methodologies used in the field of computational modeling. The physical modeling approach is domain-specific and directly relies on the physical laws governing a particular system, whereas variational frameworks are primarily rooted in mathematical principles and optimization techniques. The variational frameworks operate based on the principle of optimizing a function, often referred to as a functional, which typically represents energy/cost forms associated with a physical or computational system. Through the optimization of this functional, we can infer the optimal parameters or states of the system. This approach is more versatile and applicable to a broad spectrum of problems.

Our proposed variational framework is particularly designed to address the issues of prolonged data acquisition time and the resolution constraints imposed by the scanning resolution and diffraction effects (referred to as concerns (i) and (ii) in Section 1). This framework can handle LR input slices, even if they are limited in number, and adeptly reconstruct them into a smooth SR surface. Reconstruction from these $s$ slices $\{\Pi _{i}\}_{i = 1, \ldots, s}$ can be framed as a variational model, motivated by the works [15,18,56-58]. Mathematically, the goal is to find the (local) minimum $u^{*}$ that satisfies our new adapted Euler-Elastica-based formulation

$$\begin{aligned} \begin{aligned} & \arg \min_{u} \mathscr {m} \int_{\Omega} \left( \frac{\varepsilon}{2} |\nabla u|^{2} + \frac{1}{\varepsilon} W(u) \right) \, \mathrm{d} \Omega + \frac{\mathscr {n}}{2\varepsilon} \int_{\Omega} \left( \varepsilon \Delta u - \frac{1}{\varepsilon} W'(u) \right)^{2} \, \mathrm{d} \Omega \\ & \mbox{ s.t. } u^{in} \le u \le u^{ex}. \end{aligned} \end{aligned}$$

Here, $u: \Omega \subset \mathbb {R}^{3} \to \mathbb {R}$ is the phase-field function providing a 3D representation of the object, governed by the profile function $q$; $q$ is designed to be piecewise, taking a value of 1 within the object, one half on the boundary, and 0 outside the object, so as to maintain continuity and smoothness in the phase-field function; $\nabla$ and $\Delta$ are the gradient and Laplacian operators (note that $\Delta = \operatorname {div}\nabla$ with divergence operator $\operatorname {div}$); $\varepsilon$ is the phase-field parameter, linked to the interface thickness; $u^{in} \le u \le u^{ex}$ are the linear obstacle constraints, ensuring that the shape of the input data is preserved, where the phase-field profiles $u^{in}$ and $u^{ex}$ are derived from the interior region $\{\Pi _{s}^{in}\}$ and exterior region $\{\Pi _{s}^{ex}\}$ through the profile function $q$; and $W(u)$ is the double-well potential defined by $W(u) = \frac {1}{2}u^{2}(1-u)^{2}$, which has two minima related to the profile function $q$ and whose first derivative is $W'(u) = u(u-1)(2u-1)$. Specifically, the pair of coefficients $(\mathscr {m}, \mathscr {n})$ controls the type of formulation applied. If $\mathscr {m} = 1, \mathscr {n} = 0$, the formulation is the perimeter-based $\mathscr {P_{\varepsilon }}$ from the perimeter energy, focusing on preserving the edge features of the object. Conversely, if $\mathscr {m} = 0, \mathscr {n} = 1$, the Willmore-based formulation $\mathscr {W_{\varepsilon }}$ is applied, which is related to the mean curvature from the Willmore energy and promotes smoothness in the output. Note that, without additional restrictions, this formulation could lead to an output that takes the shape of a sphere, since the sphere is the configuration that minimizes the Willmore energy. Lastly, when $\mathscr {m} = 1, \mathscr {n} = 1$, the Euler-Elastica-based formulation $\mathscr {E_{\varepsilon }}$ is used, which is an optimal combination of edge preservation and smoothness. For detailed mathematical preliminaries and clarifications, please refer to Sec. 1 of Supplement 1 [15,18,57,58].
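To make these ingredients concrete, the following short sketch (our own helper definitions, not the released BLIss code) collects the double-well potential, the derivatives used later in the semi-implicit updates, and the $(\mathscr {m}, \mathscr {n})$ switch between formulations.

```python
# Hedged sketch of the building blocks named above.
def W(u):
    """Double-well potential with minima at u = 0 and u = 1."""
    return 0.5 * u**2 * (1.0 - u)**2

def dW(u):
    """First derivative W'(u) = u(u-1)(2u-1)."""
    return u * (u - 1.0) * (2.0 * u - 1.0)

def d2W(u):
    """Second derivative W''(u), used by the semi-implicit ADMM updates."""
    return 6.0 * u**2 - 6.0 * u + 1.0

# (m, n) selects the formulation in the functional above:
PERIMETER      = (1.0, 0.0)   # edge-preserving perimeter energy, P_eps
WILLMORE       = (0.0, 1.0)   # smoothness-promoting Willmore energy, W_eps
EULER_ELASTICA = (1.0, 1.0)   # adapted Euler-Elastica energy, E_eps
```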

To address our model as represented in (1), we employ the alternating direction method of multipliers (ADMM). This approach breaks down a complex optimization problem into smaller, more manageable subproblems for swifter numerical approximation. We further implement the linear obstacle restriction through orthogonal projection, thereby handling the inequality constraint with improved flexibility, which is achieved by

$$u^{in} \leq u \leq u^{ex} \quad \leftrightsquigarrow \quad \max(\min(u, u^{ex}), u^{in}).$$
With $\mathbf {w} = \nabla u$, the augmented Lagrangian functional is then expressed using a penalty parameter $\rho > 0$ and a Lagrange multiplier $\boldsymbol {\lambda }$, as follows
$$\begin{aligned}\begin{aligned} \mathcal{L}^{\rho} (u, \mathbf{w}; \boldsymbol{\lambda}) = \int_{\Omega} & \left[ \mathscr {m} \left( \frac{\varepsilon}{2} |\mathbf{w}|^{2} + \frac{1}{\varepsilon} W(u) \right) + \frac{\mathscr {n}}{2\varepsilon} \left( \varepsilon \operatorname{div} \mathbf{w} - \frac{1}{\varepsilon} W'(u) \right)^{2} \right. \\ & + \left. \frac{\rho}{2} \left| \nabla u - \mathbf{w} + \rho^{{-}1} \boldsymbol{\lambda} \right|^{2} - \frac{\boldsymbol{\lambda}^{2}}{2 \rho} \right] \, \mathrm{d} \Omega. \end{aligned} \end{aligned}$$

Hence, the application of the ADMM involves a three-step process: solving two subproblems $u, \mathbf {w}$ and updating one multiplier $\boldsymbol {\lambda }$

$$\begin{aligned} \left\{ \begin{array}{l} u_{k+1} = \arg \min _{u} \int_{\Omega} \left[ \frac{\mathscr {m}}{\varepsilon} W(u) - \frac{\mathscr {n}}{\varepsilon} (\operatorname{div} \mathbf{w}_{k}) W'(u) + \frac{\mathscr {n}}{2\varepsilon^{3}} (W'(u))^{2} \right. \\ \left. \qquad \qquad \qquad \qquad + \frac{\rho}{2} \left| \nabla u - \mathbf{w}_{k} + \rho^{{-}1} \boldsymbol{\lambda}_{k} \right|^{2} \right] \, \mathrm{d} \Omega \\ \mathbf{w}_{k+1} = \arg \min _{\mathbf{w}} \int_{\Omega} \left[ \frac{\mathscr {m}\varepsilon}{2} |\mathbf{w}|^{2} + \frac{\mathscr {n}\varepsilon}{2} (\operatorname{div} \mathbf{w})^{2} - \frac{\mathscr {n}}{\varepsilon} (\operatorname{div} \mathbf{w}) W'(u_{k+1}) \right. \\ \left. \qquad \qquad \qquad \qquad + \frac{\rho}{2} \left| \mathbf{w} - \nabla u_{k+1} - \rho^{{-}1} \boldsymbol{\lambda}_{k} \right|^{2} \right] \, \mathrm{d} \Omega \\ \boldsymbol{\lambda}_{k+1} = \boldsymbol{\lambda}_{k} + \rho \left( \nabla u_{k+1}-\mathbf{w}_{k+1} \right) \end{array} \right.. \end{aligned}$$

Utilizing the numerical Euler semi-implicit discretization scheme with a time step $\tau$, the optimal solutions for the $u$- and $\mathbf {w}$-subproblems in (4) can be updated as follows:

$$\begin{aligned}\begin{aligned} u_{k+1} = (I - \tau \rho \Delta)^{{-}1} & \left[ u_{k} + \tau \left( - \rho \nabla \cdot \mathbf{w}_{k} + \nabla \cdot \boldsymbol{\lambda}_{k} - \frac{\mathscr {m}}{\varepsilon} W'(u_{k}) \right. \right. \\ & \left. \left. + \frac{\mathscr {n}}{\varepsilon} \operatorname{div} \mathbf{w}_{k} W^{\prime\prime}(u_{k}) + \frac{\mathscr {n}}{\varepsilon^{3}} W'(u_{k}) W^{\prime\prime}(u_{k}) \right) \right], \end{aligned} \end{aligned}$$
$$\begin{aligned} \mathbf{w}_{k+1} = (I + \mathscr {m} \varepsilon \tau - \mathscr {n} \varepsilon \tau \Delta)^{{-}1} \left[ \mathbf{w}_{k} + \frac{\mathscr {n} \tau}{\varepsilon} W'(u_{k+1}) \Delta \mathbf{w}_{k} + \tau \rho \left( \mathbf{w}_{k} - \nabla u_{k+1} - \rho^{{-}1} \boldsymbol{\lambda}_{k} \right) \right]. \end{aligned}$$

Lastly, recall that the multiplier $\boldsymbol {\lambda }$ is updated by $\boldsymbol {\lambda }_{k+1} = \boldsymbol {\lambda }_{k} + \rho \left ( \nabla u_{k+1}-\mathbf {w}_{k+1} \right ).$ This completes one iteration of the ADMM method; the full algorithm is presented in Algorithm 1 (see Sec. 2 of Supplement 1 for detailed derivations).
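For illustration, the sketch below implements one such ADMM sweep following Eqs. (4)-(6). It is our own simplified rendering, not the released BLIss code: we assume a periodic finite-difference discretization so that both linear solves, $(I - \tau \rho \Delta)^{-1}$ and $(I + \mathscr {m}\varepsilon \tau - \mathscr {n}\varepsilon \tau \Delta)^{-1}$, become diagonal in Fourier space; `dW` and `d2W` are the derivatives of the double-well potential defined earlier.

```python
# Hedged sketch of one ADMM iteration, Eqs. (4)-(6), on an (Nx, Ny, Nz) grid.
# u has shape (Nx, Ny, Nz); w and lam carry a leading component axis of size 3.
import numpy as np

def lap(f):
    """6-point Laplacian with periodic wrap (unit grid spacing)."""
    return sum(np.roll(f, s, axis=a) for a in range(3) for s in (1, -1)) - 6.0 * f

def grad(u):
    return np.stack(np.gradient(u), axis=0)

def div(w):
    return sum(np.gradient(w[i], axis=i) for i in range(3))

def lap_symbol(shape):
    """Fourier symbol of lap(), so that lap(f) = ifftn(lap_hat * fftn(f))."""
    ks = np.meshgrid(*[np.fft.fftfreq(n) for n in shape], indexing="ij")
    return 2.0 * sum(np.cos(2.0 * np.pi * k) for k in ks) - 6.0

def spectral_solve(rhs, denom):
    return np.real(np.fft.ifftn(np.fft.fftn(rhs) / denom))

def admm_step(u, w, lam, u_in, u_ex, m, n, eps, tau, rho, lap_hat):
    # u-subproblem, Eq. (5), followed by the obstacle projection of Eq. (2)
    rhs_u = u + tau * (-rho * div(w) + div(lam)
                       - (m / eps) * dW(u)
                       + (n / eps) * div(w) * d2W(u)
                       + (n / eps**3) * dW(u) * d2W(u))
    u = spectral_solve(rhs_u, 1.0 - tau * rho * lap_hat)
    u = np.clip(u, u_in, u_ex)                 # max(min(u, u_ex), u_in)
    # w-subproblem, Eq. (6), solved component-wise
    gu = grad(u)
    denom_w = 1.0 + m * eps * tau - n * eps * tau * lap_hat
    w = np.stack([spectral_solve(
            w[i] + (n * tau / eps) * dW(u) * lap(w[i])
                 + tau * rho * (w[i] - gu[i] - lam[i] / rho),
            denom_w) for i in range(3)])
    # multiplier update
    lam = lam + rho * (gu - w)
    return u, w, lam
```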

Algorithm 1. Alternating Direction Method of Multipliers (ADMM)

3. Experimental results with discussion and application

In this section, we showcase experimental results obtained using BLIss. The test objects are fabricated economically using a fused deposition modeling (FDM) 3D printer and high-impact polystyrene (HIPS) material (see optical images in Sec. 3 of Supplement 1). HIPS is chosen for its low absorption in the THz spectrum [59], which helps prevent considerable degradation of the retrieved time-resolved THz signals.

Building on the conventional techniques for THz 3D CT imaging outlined in Section 2.2, we generate preliminary LR reconstructions by stacking the designated $s$ slices for three objects, as illustrated in Fig. 4(a) ($s = 218$ for Deer), 5(a) ($s = 258$ for Polarbear), and 6(a) ($s = 138$ for Skull). For the technical specifics, each slice is obtained via IRT from $\mathcal {R} = 60$ projections acquired with a rotational step of 6 degrees. Both the $\mathcal {L}$ and $\mathcal {H}$ directions in each projection use raster scanning with a step size of $\Delta \mathcal {L} = \Delta \mathcal {H} = 0.25$ mm. The scanning ranges are set as: $\mathcal {L} = 50$ mm and $\mathcal {H} = 70$ mm for Deer; $\mathcal {L} = 60$ mm and $\mathcal {H} = 70$ mm for Polarbear; and $\mathcal {L} = \mathcal {H} = 40$ mm for Skull. In our experiments with Algorithm 1, the parameters are set to $\tau = \varepsilon ^{3.5}$, $\rho = 1$, and $\varepsilon = 3/\max {(N_{x}, N_{y}, N_{z})}$, aligned with the maximum pixel number from the scanning resolution across the three axes of the initial input $u_{0}$, where $N_{x} = N_{y} = \mathcal {L}/\Delta \mathcal {L}$ and $N_{z} = \mathcal {H}/\Delta \mathcal {H} \geq s$. The algorithm terminates when the disparity between successive iterative outcomes falls below a designated threshold, with most processes concluding in fewer than ten iterations. The smooth SR results are then reconstructed by the Willmore-based formulation $\mathscr {W}_{\varepsilon }$ ($\mathscr {m} = 0, \mathscr {n} = 1$) and by the Euler-Elastica-based formulation $\mathscr {E}_{\varepsilon }$ ($\mathscr {m} = 1, \mathscr {n} = 1$), as illustrated in Fig. 4(e), 4(i) for Deer, Fig. 5(e), 5(i) for Polarbear, and Fig. 6(e), 6(i) for Skull. Intuitively, fewer input data translate to reduced acquisition time. To underscore the flexibility of our framework and reduce the data acquisition time, we reconstruct using $\mathscr {W}_{\varepsilon }$ and $\mathscr {E}_{\varepsilon }$ with fewer input slices. Setting gaps at intervals of 1, 3, and 5, the results are illustrated in Fig. 4(f)–4(h), 4(j)–4(l) for Deer, Fig. 5(f)–5(h), 5(j)–5(l) for Polarbear, and Fig. 6(f)–6(h), 6(j)–6(l) for Skull. All computational tasks are executed on MATLAB_R2023b running on macOS Sonoma 14.0 with an Apple M1 Max chip and 64 GB memory.
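As a brief illustration of these settings (not the MATLAB implementation used for the reported results), the following sketch reproduces the parameter choices and the gap-based slice subsampling; the grid sizes use the Deer scan geometry quoted above.

```python
# Hedged sketch of the parameter setup and slice subsampling described above.
import numpy as np

def setup_parameters(Nx, Ny, Nz):
    eps = 3.0 / max(Nx, Ny, Nz)   # eps tied to the scanning resolution
    tau = eps ** 3.5              # time step of the semi-implicit scheme
    rho = 1.0                     # ADMM penalty parameter
    return eps, tau, rho

def subsample_slices(slices, gap):
    """slices: (s, L, L) stack; keep every (gap+1)-th slice.
    gap = 1, 3, 5 turn s = 218 into 109, 55, 37 slices (Deer)."""
    return slices[::gap + 1]

# Deer scan geometry: L = 50 mm, H = 70 mm, raster step 0.25 mm
Nx = Ny = int(50 / 0.25)          # 200 pixels
Nz = int(70 / 0.25)               # 280 >= s = 218 slices
eps, tau, rho = setup_parameters(Nx, Ny, Nz)
```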

Fig. 4. Illustrations of Deer object: (a) conventional LR reconstruction with 218 slices where smooth SR reconstruction (e) by Willmore-based $\mathscr {W_{\varepsilon }}$ and (i) by Euler-Elastica-based model $\mathscr {E_{\varepsilon }}$; (b) input 109 slices with 1 gap where reconstruction (f) by $\mathscr {W_{\varepsilon }}$ and (j) by $\mathscr {E_{\varepsilon }}$; (c) input 55 slices with 3 gaps where reconstruction (g) by $\mathscr {W_{\varepsilon }}$ and (k) by $\mathscr {E_{\varepsilon }}$; (d) input 37 slices with 5 gaps where reconstruction (h) by $\mathscr {W_{\varepsilon }}$ and (l) by $\mathscr {E_{\varepsilon }}$.

Fig. 5. Illustrations of Polarbear object: (a) conventional LR reconstruction with 258 slices where smooth reconstruction (e) by Willmore-based $\mathscr {W_{\varepsilon }}$ and (i) by Euler-Elastica-based formulation $\mathscr {E_{\varepsilon }}$; (b) input 129 slices with 1 gap where reconstruction (f) by $\mathscr {W_{\varepsilon }}$ and (j) by $\mathscr {E_{\varepsilon }}$; (c) input 65 slices with 3 gaps where reconstruction (g) by $\mathscr {W_{\varepsilon }}$ and (k) by $\mathscr {E_{\varepsilon }}$; (d) input 43 slices with 5 gaps where reconstruction (h) by $\mathscr {W_{\varepsilon }}$ and (l) by $\mathscr {E_{\varepsilon }}$.

Fig. 6. Illustrations of Skull object: (a) conventional LR reconstruction with 138 slices where smooth SR reconstruction (e) by Willmore-based $\mathscr {W_{\varepsilon }}$ and (i) by Euler-Elastica-based formulation $\mathscr {E_{\varepsilon }}$; (b) input 69 slices with 1 gap where reconstruction (f) by $\mathscr {W_{\varepsilon }}$ and (j) by $\mathscr {E_{\varepsilon }}$; (c) input 35 slices with 3 gaps where reconstruction (g) by $\mathscr {W_{\varepsilon }}$ and (k) by $\mathscr {E_{\varepsilon }}$; (d) input 23 slices with 5 gaps where reconstruction (h) by $\mathscr {W_{\varepsilon }}$ and (l) by $\mathscr {E_{\varepsilon }}$.

In our implementation, we preserve the dimensional consistency between inputs and outputs for each object, maintaining isotropic resolution even when fewer input slices are used. Specifically, our algorithm records and processes all data within the LR 3D matrix space $\mathbb {R}^{3}_{N_{x} \times N_{y} \times N_{z}}$, ensuring a consistent dimensional framework throughout. All results are then reconstructed using an isosurface, a 3D triangular mesh representation well-suited to our framework. This representation allows for the precise reconstruction of details, effectively demonstrating the isotropic resolution capabilities of our SR algorithm. However, because the triangular mesh surfaces reconstructed by the two formulations $\mathscr {W}_{\varepsilon }$ and $\mathscr {E}_{\varepsilon }$ in BLIss are not consistent (i.e., they differ in the number of vertices and faces), a fair quantitative comparison is challenging. To address this, we introduce a metric to evaluate the surfaces reconstructed by both formulations, $\mathscr {W}_{\varepsilon }$ and $\mathscr {E}_{\varepsilon }$, serving as an indicator of the global smoothness of the surfaces. This metric calculates the standard deviation of the Gaussian curvatures (GC), $\sigma _{{\textrm {GC}}}$, and of the mean curvatures (MC), $\sigma _{{\textrm {MC}}}$, based on discrete geometry (details in Sec. 4A of Supplement 1). In Table 1, we provide the values of $\sigma _{{\textrm {GC}}}$ and $\sigma _{{\textrm {MC}}}$ for various input slices and gap(s). For identical input slices, both the $\sigma _{{\textrm {GC}}}$ and $\sigma _{{\textrm {MC}}}$ metrics indicate that results obtained using $\mathscr {E}_{\varepsilon }$ exhibit superior smoothness compared to $\mathscr {W}_{\varepsilon }$. This is attributed to the ability of $\mathscr {E}_{\varepsilon }$ to automatically handle topologically complex geometries, which overcomes the limitations of $\mathscr {W}_{\varepsilon }$. Note that a lower value in these curvature computations signifies better smoothness, indicating less variability in the curvature. In addition, as the number of input slices decreases for the same object, both $\sigma _{{\textrm {GC}}}$ and $\sigma _{{\textrm {MC}}}$ also decrease for the same formulation. This trend implies that a reduced number of input slices leads to the loss of finer details in the reconstruction. Although our framework visually demonstrates its capacity to reconstruct from limited input slices, we still require accuracy verification to confirm the fidelity relative to the reconstructions using full input slices by the same formulation. To achieve this, we employ the multi-scale structural similarity index measure (MS-SSIM) to ascertain the precision of reconstructions from sparse slices (details in Sec. 4B of Supplement 1). The result obtained with full input slices serves as the benchmark for these evaluations and is assigned a value of one in Table 1. When comparing results using the same formulation with different input slices, the MS-SSIM values consistently fall within the range of 0.97 to 0.99. These high MS-SSIM scores underscore the 3D structural fidelity of our reconstructions, even when using only $1/6$ of the original slices in BLIss. Additionally, we report the average elapsed time per iteration during the reconstruction of the three objects in Table 1. Our framework achieves an average elapsed time per iteration on the scale of seconds, demonstrating the efficiency of the proposed BLIss.
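To make the smoothness metric concrete, the sketch below computes the per-vertex discrete Gaussian curvature of a triangular mesh via the angle-deficit formula and returns its standard deviation $\sigma _{{\textrm {GC}}}$. It is our own illustration of the idea (the exact discrete operators used in Sec. 4A of Supplement 1 may differ), and the mean-curvature counterpart based on the discrete cotangent formula is omitted for brevity.

```python
# Hedged sketch: sigma_GC from the angle-deficit Gaussian curvature.
import numpy as np

def gaussian_curvature_std(verts, faces):
    """verts: (V, 3) float array; faces: (F, 3) int array of vertex indices."""
    angle_sum = np.zeros(len(verts))
    area_sum = np.zeros(len(verts))
    for tri in faces:
        p = verts[tri]                                   # the 3 corner positions
        area = 0.5 * np.linalg.norm(np.cross(p[1] - p[0], p[2] - p[0]))
        for i in range(3):
            a, b, c = p[i], p[(i + 1) % 3], p[(i + 2) % 3]
            u, v = b - a, c - a
            cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)
            angle_sum[tri[i]] += np.arccos(np.clip(cosang, -1.0, 1.0))
            area_sum[tri[i]] += area / 3.0               # barycentric vertex area
    gc = (2.0 * np.pi - angle_sum) / np.maximum(area_sum, 1e-12)
    return np.std(gc)
```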

Table 1. Comparison metrics for three reconstructed objects Deer, Polarbear, and Skull with various input slices and gap(s) by Willmore-based $\mathscr {W}_{\varepsilon }$ and Euler-Elastica-based $\mathscr {E}_{\varepsilon }$ formulations: the standard deviation of Gaussian curvatures $\sigma _{{\textrm {GC}}}$ and of mean curvatures $\sigma _{{\textrm {MC}}}$; the average elapsed time of each iteration (seconds/iteration or s/iter); and the multi-scale structural similarity index measure (MS-SSIM) where $\uparrow$ ($\downarrow$): higher (lower) is better.

In expanding the discussion of our findings, we emphasize the real-world applications and contributions of our research to the field of THz imaging. Our results pave the way for advanced diagnostic and analytical techniques in non-invasive medical imaging, material characterization, and security screening [4-7,12,13]. By enhancing the resolution and fidelity of THz imaging, our approach could significantly improve the detection and analysis of biological samples, the multi-scale information inside materials, and the inspection of art and historical artifacts without causing damage. These applications underscore the potential of our findings to revolutionize various industries by providing safer, more accurate, and more detailed imaging capabilities. To illustrate the practical application of this research, we choose a peanut as the test subject. The peanut exhibits a textured surface and contains two inner seeds along with an air gap; refer to the optical image of the halved peanut after data acquisition in Fig. 7(a). We first present the results in Fig. 7(b)–7(c), using the Time-MAX information to reconstruct the outer shell. Furthermore, our approach can be adapted to other data types obtained from the time-resolved THz pulse, which contains hyperspectral information over a broad frequency range. One notable aspect is the phase in the frequency domain, which offers distinct advantages, such as depth information, material properties, and better image contrast. Therefore, we derive the phase information in the frequency domain by $\Phi (\omega ) = \tan ^{-1}(b/a)$ from the Fourier amplitude $\mathcal {F}(f(t)) = F(\omega ) = a(\omega ) + ib(\omega ) = \int f(t) e^{-2 \pi i \omega t} \, \mathrm {d}t$ of the time-resolved signals $f(t)$ by the Fourier transform (FT) $\mathcal {F}$. A representative projection of the unwrapped phase at a chosen frequency of 1.0010 THz is illustrated in Fig. 7(d). The selected frequency is determined by the aim of enhancing material contrast, focusing on the fat content of the peanut kernels and the fiber of the outer shell, within the THz frequency range. When applying our framework with only $1/3$ of the input slices, reducing the data acquisition time 3-fold, as shown in Fig. 7(e), we obtain the SR results in Fig. 7(f) using $\mathscr {W}_{\varepsilon }$ and Fig. 7(g) using $\mathscr {E}_{\varepsilon }$. Notably, the results produced by $\mathscr {E}_{\varepsilon }$ exhibit superior smoothness, as indicated by lower values of $\sigma _{{\textrm {GC}}}$ and $\sigma _{{\textrm {MC}}}$. Additionally, the reconstruction can be completed in seconds on a commonplace laptop, reaffirming the precision and efficiency of our framework with very limited dataset inputs.
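The phase-based projection described above can be sketched in a few lines: each time-resolved trace is Fourier transformed, the phase is unwrapped along the frequency axis, and the value at the bin nearest the chosen frequency (1.0010 THz here) forms one pixel of the projection image. The array shapes and names below are illustrative only, not the authors' processing code.

```python
# Hedged sketch of the unwrapped-phase projection at a chosen frequency.
import numpy as np

def phase_projection(signals, dt=50e-15, f_sel=1.0010e12):
    """signals: (H, L, T) time-resolved traces; returns an (H, L) phase image."""
    T = signals.shape[-1]
    freqs = np.fft.rfftfreq(T, d=dt)                # frequency axis in Hz
    spectrum = np.fft.rfft(signals, axis=-1)        # F(omega) = a + i b
    phase = np.unwrap(np.angle(spectrum), axis=-1)  # Phi = arctan(b / a), unwrapped
    k = np.argmin(np.abs(freqs - f_sel))            # bin nearest 1.0010 THz
    return phase[..., k]
```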

Fig. 7. Illustrations of (a) the optical image of the unveiled peanut kernels. (b) projected Time-MAX THz 2D image. (c) reconstructed THz 3D image of the peanut from (b). (d) representative projection of the unwrapped phase at 1.0010 THz. (e) 46 slices extracted from (d) by IRT. (f) reconstruction of peanut kernels by the Willmore model $\mathscr {W}_{\varepsilon }$ from (e) with higher computational time (more iterations) and lower smoothness. (g) optimized reconstruction of kernels by the Euler-Elastica model $\mathscr {E}_{\varepsilon }$ from (e) with lower computational time (fewer iterations) and higher smoothness.

4. Conclusion

In our pursuit of advancing pulsed THz 3D SR reconstruction, we propose a promising variational framework based on the Willmore-based formulation $\mathscr {W}_{\varepsilon }$ and an adapted Euler-Elastica-based formulation $\mathscr {E}_{\varepsilon }$. With our 3D representation utilizing the implicit phase-field function in conjunction with the designed variational framework, we achieve super-resolution results, surpassing the resolution limitations imposed by coarse input slices due to the scanning resolution (see Sec. 5 of Supplement 1). Furthermore, this framework is versatile, capable of processing a reduced number of LR input slices, speeding up both data acquisition and computation. Its adaptability to a range of data types boosts its value, particularly in the context of pulsed THz imaging. To substantiate our results and provide a quantitative assessment of their quality, we employ two metrics: global smoothness, measured by the standard deviations of the Gaussian curvatures $\sigma _{{\textrm {GC}}}$ and mean curvatures $\sigma _{{\textrm {MC}}}$, and accuracy verification using MS-SSIM. The capabilities of this framework are illustrated through its application to non-destructive THz imaging of a peanut, where it significantly improves the precision and clarity with which the internal structure is captured.

Our work makes substantial strides in addressing long-standing challenges within THz 3D imaging, which have been intricately linked to wave propagation physics and hardware limitations. This advancement has the potential to transform the landscape of THz imaging, expanding its utility across both research and industry. Moreover, its implications extend to other domains, including X-ray and MRI imaging. By unlocking the capability to achieve higher resolution with sparse inputs, we are paving the way to unveil previously hidden details, offering profound insights into materials and processes that were once enigmatic.

While our findings lay a foundation for improved THz 3D reconstruction, the voyage of discovery is far from complete. One promising avenue involves adapting our framework to a broader range of materials and applications, extending its universal applicability. Beyond this, integrating deep learning approaches holds untapped potential to further optimize the imaging process, making it more adaptive and intuitive. Additionally, as hardware technology advances, it could be seamlessly integrated with our approach, further elevating resolution and image clarity (see Sec. 6 of Supplement 1). In the ever-evolving realm of THz imaging, consider our work as the cornerstone, setting the stage for future layers of transformative innovation.

Funding

National Science and Technology Council (112-2221-E-007-089-MY3).

Acknowledgments

The first author is grateful for partial support from the UoL-NTHU Dual PhD Programme.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request. Demo codes for BLIss are available in Ref. [60].

Supplemental document

See Supplement 1 for supporting content.

References

1. G. Ok, K. Park, H. J. Kim, et al., “High-speed terahertz imaging toward food quality inspection,” Appl. Opt. 53(7), 1406–1412 (2014). [CrossRef]  

2. M. Karaliūnas, K. E. Nasser, A. Urbanowicz, et al., “Non-destructive inspection of food and technical oils by terahertz spectroscopy,” Sci. Rep. 8(1), 18025 (2018). [CrossRef]  

3. B. Li, Z.-X. Sun, A.-K. Yang, et al., “Study on detection of the internal quality of pumpkin seeds based on terahertz imaging technology,” Food Measure 17(2), 1576–1585 (2023). [CrossRef]  

4. K. B. Cooper, R. J. Dengler, N. Llombart, et al., “Penetrating 3-D Imaging at 4- and 25-m Range Using a Submillimeter-Wave Radar,” IEEE Trans. Microwave Theory Tech. 56(12), 2771–2778 (2008). [CrossRef]  

5. K. B. Cooper, R. J. Dengler, N. Llombart, et al., “THz Imaging Radar for Standoff Personnel Screening,” IEEE Trans. Terahertz Sci. Technol. 1(1), 169–182 (2011). [CrossRef]  

6. Y. Cheng, Y. Wang, Y. Niu, et al., “Concealed object enhancement using multi-polarization information for passive millimeter and terahertz wave security screening,” Opt. Express 28(5), 6350–6366 (2020). [CrossRef]  

7. Y. Takida, K. Nawata, and H. Minamide, “Security screening system based on terahertz-wave spectroscopic gas detection,” Opt. Express 29(2), 2529–2537 (2021). [CrossRef]  

8. M. Bessou, B. Chassagne, J.-P. Caumes, et al., “Three-dimensional terahertz computed tomography of human bones,” Appl. Opt. 51(28), 6738–6744 (2012). [CrossRef]  

9. M. Jewariya, E. Abraham, T. Kitaguchi, et al., “Fast three-dimensional terahertz computed tomography using real-time line projection of intense terahertz pulse,” Opt. Express 21(2), 2423–2433 (2013). [CrossRef]  

10. K. Krügener, E.-M. Stübling, R. Jachim, et al., “THz tomography for detecting damages on wood caused by insects,” Appl. Opt. 58(22), 6063–6066 (2019). [CrossRef]  

11. Y. H. Tao, A. J. Fitzgerald, and V. P. Wallace, “Non-Contact, Non-Destructive Testing in Various Industrial Sectors with Terahertz Technology,” Sensors 20(3), 712 (2020). [CrossRef]  

12. D. M. Mittleman, “Twenty years of terahertz imaging [Invited],” Opt. Express 26(8), 9417–9431 (2018). [CrossRef]  

13. A. Leitenstorfer, A. S. Moskalenko, T. Kampfrath, et al., “The 2023 terahertz science and technology roadmap,” J. Phys. D: Appl. Phys. 56(22), 223001 (2023). [CrossRef]  

14. Y.-C. Hung, T.-H. Chao, P. Yu, et al., “Terahertz spatio-temporal deep learning computed tomography,” Opt. Express 30(13), 22523–22537 (2022). [CrossRef]  

15. Y. Zhang, K. Chen, and S.-H. Yang, “Fast Terahertz 3D Super-Resolution Surface Reconstruction by Variational Model from Limited Low-Resolution Sampling,” in 2022 47th International Conference on Infrared, Millimeter and Terahertz Waves (IRMMW-THz), (2022), pp. 1–2.

16. W.-T. Su, Y.-C. Hung, P.-J. Yu, et al., “Making the Invisible Visible: Toward High-Quality Terahertz Tomographic Imaging via Physics-Guided Restoration,” International Journal of Computer Vision pp. 1–20 (2023).

17. W.-T. Su, Y.-C. Hung, P.-J. Yu, et al., “Physics-Guided Terahertz Computational Imaging: A tutorial on state-of-the-art techniques,” IEEE Signal Process. Mag. 40(2), 32–45 (2023). [CrossRef]  

18. Y. Zhang, K. Chen, and S.-H. Yang, “Euler-Elastica Variational Model for Pulsed Terahertz 3D Imaging,” in CLEO 2023, (Optica Publishing Group, 2023), p. JTh2A.104.

19. B. B. Hu and M. C. Nuss, “Imaging with terahertz waves,” Opt. Lett. 20(16), 1716–1718 (1995). [CrossRef]  

20. D. M. Mittleman, S. Hunsche, L. Boivin, et al., “T-ray tomography,” Opt. Lett. 22(12), 904–906 (1997). [CrossRef]  

21. N. Karpowicz, H. Zhong, J. Xu, et al., “Comparison between pulsed terahertz time-domain imaging and continuous wave terahertz imaging,” Semicond. Sci. Technol. 20(7), S293–S299 (2005). [CrossRef]  

22. C. Jansen, S. Wietzke, O. Peters, et al., “Terahertz imaging: applications and perspectives,” Appl. Opt. 49(19), E48–E57 (2010). [CrossRef]  

23. M. Koch, D. M. Mittleman, J. Ornik, et al., “Terahertz time-domain spectroscopy,” Nat. Rev. Methods Primers 3(1), 48 (2023). [CrossRef]  

24. A. Markelz, S. Whitmire, J. Hillebrecht, et al., “THz time domain spectroscopy of biomolecular conformational modes,” Phys. Med. Biol. 47(21), 3797–3805 (2002). [CrossRef]  

25. V. Conti Nibali and M. Havenith, “New Insights into the Role of Water in Biological Function: Studying Solvated Biomolecules Using Terahertz Absorption Spectroscopy in Conjunction with Molecular Dynamics Simulations,” J. Am. Chem. Soc. 136(37), 12800–12807 (2014). [CrossRef]  

26. K. N. Okada, Y. Takahashi, M. Mogi, et al., “Terahertz spectroscopy on Faraday and Kerr rotations in a quantum anomalous Hall state,” Nat. Commun. 7(1), 12245 (2016). [CrossRef]  

27. K. Shiraga, Y. Ogawa, and N. Kondo, “Hydrogen Bond Network of Water around Protein Investigated with Terahertz and Infrared Spectroscopy,” Biophys. J. 111(12), 2629–2641 (2016). [CrossRef]  

28. G. T. Herman, Fundamentals of computerized tomography: image reconstruction from projections (Springer Science & Business Media, 2009).

29. D. J. Brenner and E. J. Hall, “Computed tomography–an increasing source of radiation exposure,” N. Engl. J. Med. 357(22), 2277–2284 (2007). [CrossRef]  

30. B. Ferguson, S. Wang, D. Gray, et al., “T-ray computed tomography,” Opt. Lett. 27(15), 1312–1314 (2002). [CrossRef]  

31. B. Recur, A. Younus, S. Salort, et al., “Investigation on reconstruction methods applied to 3D terahertz computed tomography,” Opt. Express 19(6), 5105–5117 (2011). [CrossRef]  

32. T. Yasui, E. Saneyoshi, and T. Araki, “Asynchronous optical sampling terahertz time-domain spectroscopy for ultrahigh spectral resolution and rapid data acquisition,” Appl. Phys. Lett. 87(6), 061101 (2005). [CrossRef]  

33. Y. Kim and D.-S. Yee, “High-speed terahertz time-domain spectroscopy based on electronically controlled optical sampling,” Opt. Lett. 35(22), 3715–3717 (2010). [CrossRef]  

34. X. Li, J. Li, Y. Li, et al., “High-throughput terahertz imaging: progress and challenges,” Light: Sci. Appl. 12(1), 233 (2023). [CrossRef]  

35. R. Al Hadi, H. Sherry, J. Grzyb, et al., “A 1 k-Pixel Video Camera for 0.7–1.1 Terahertz Imaging Applications in 65-nm CMOS,” IEEE J. Solid-State Circuits 47(12), 2999–3012 (2012). [CrossRef]  

36. N. Nemoto, N. Kanda, R. Imai, et al., “High-Sensitivity and Broadband, Real-Time Terahertz Camera Incorporating a Micro-Bolometer Array With Resonant Cavity Structure,” IEEE Trans. Terahertz Sci. Technol. 6(2), 175–182 (2016). [CrossRef]  

37. N. T. Yardimci and M. Jarrahi, “High sensitivity terahertz detection through large-area plasmonic nano-antenna arrays,” Sci. Rep. 7(1), 42667 (2017). [CrossRef]  

38. X. Li, D. Mengu, A. Ozcan, et al., “Super-Resolution Terahertz Imaging Through a Plasmonic Photoconductive Focal-Plane Array,” in CLEO 2023, (Optica Publishing Group, 2023), p. SM1N.2.

39. Q. Li, Q. Yin, R. Yao, et al., “Continuous-wave terahertz scanning image resolution analysis and restoration,” Opt. Eng. 49(3), 037007 (2010). [CrossRef]  

40. S.-H. Ding, Q. Li, R. Yao, et al., “High-resolution terahertz reflective imaging and image restoration,” Appl. Opt. 49(36), 6834–6839 (2010). [CrossRef]  

41. D. C. Popescu and A. D. Hellicar, “Point spread function estimation for a terahertz imaging system,” EURASIP J. Adv. Signal Process. 2010(1), 575817 (2010). [CrossRef]  

42. K. Ahi, “Mathematical Modeling of THz Point Spread Function and Simulation of THz Imaging Systems,” IEEE Trans. Terahertz Sci. Technol. 7(6), 747–754 (2017). [CrossRef]  

43. K. Ahi, S. Shahbazmohamadi, and N. Asadizanjani, “Quality control and authentication of packaged integrated circuits using enhanced-spatial-resolution terahertz time-domain spectroscopy and imaging,” Opt. Lasers Eng. 104, 274–284 (2018). [CrossRef]  

44. T. M. Wong, M. Kahl, P. Haring Bolívar, et al., “Computational image enhancement for frequency modulated continuous wave (FMCW) THz image,” J. Infrared, Millimeter, Terahertz Waves 40(7), 775–800 (2019). [CrossRef]  

45. Y. Li, L. Li, A. Hellicar, et al., “Super-resolution reconstruction of terahertz images,” in Terahertz for Military and Security Applications VI, vol. 6949 J. O. Jensen, H.-L. Cui, D. L. Woolard, R. J. Hwu, eds., International Society for Optics and Photonics (SPIE, 2008), p. 69490J.

46. K. Ahi, “A method and system for enhancing the resolution of terahertz imaging,” Measurement 138, 614–619 (2019). [CrossRef]  

47. J. Kim, J. K. Lee, and K. M. Lee, “Accurate Image Super-Resolution Using Very Deep Convolutional Networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), (2016).

48. Z. Li, Z. Cen, and X. Li, “A terahertz image super-resolution reconstruction algorithm based on the deep convolutional neural network,” in AOPC 2017: Optical Sensing and Imaging Technology and Applications, vol. 10462 Y. Jiang, H. Gong, W. Chen, and J. Li, eds., International Society for Optics and Photonics (SPIE, 2017), p. 104621E.

49. K. Zhang, W. Zuo, and L. Zhang, “Learning a single convolutional super-resolution network for multiple degradations,” in Proceedings of the IEEE conference on computer vision and pattern recognition, (2018), pp. 3262–3271.

50. Z. Long, T. Wang, C. You, et al., “Terahertz image super-resolution based on a deep convolutional neural network,” Appl. Opt. 58(10), 2731–2735 (2019). [CrossRef]  

51. W. Yibin, Z. Rongyue, X. Hong, et al., “Terahertz Image Super-Resolution Reconstruction of Passive Safety Inspection Based on Generative Adversarial Network,” in 2019 International Conference on Internet of Things (iThings) and IEEE Green Computing and Communications (GreenCom) and IEEE Cyber, Physical and Social Computing (CPSCom) and IEEE Smart Data (SmartData), (2019), pp. 22–27.

52. Q. Mao, Y. Zhu, C. Lv, et al., “Convolutional neural network model based on terahertz imaging for integrated circuit defect detections,” Opt. Express 28(4), 5000–5012 (2020). [CrossRef]  

53. Y. Wang, F. Qi, and J. Wang, “Terahertz image super-resolution based on a complex convolutional neural network,” Opt. Lett. 46(13), 3123–3126 (2021). [CrossRef]  

54. X. Yang, D. Zhang, Z. Wang, et al., “Super-resolution reconstruction of terahertz images based on a deep-learning network with a residual channel attention mechanism,” Appl. Opt. 61(12), 3363–3370 (2022). [CrossRef]  

55. A. C. Kak and M. Slaney, Principles of Computerized Tomographic Imaging (SIAM, 2001).

56. M. Röger and R. Schätzle, “On a Modified Conjecture of De Giorgi,” Math. Z. 254(4), 675–714 (2006). [CrossRef]  

57. E. Bretin, F. Dayrens, and S. Masnou, “Volume reconstruction from slices,” SIAM Journal on Imaging Sciences 10(4), 2326–2358 (2017). [CrossRef]  

58. Y. Zhang, K. Chen, and S.-H. Yang, “Super-resolution surface reconstruction from few low-resolution slices,” SIAM J. Imaging Sci. 18, 1–37 (2024). [CrossRef]  

59. A. Siemion, “The Magic of Optics–An Overview of Recent Advanced Terahertz Diffractive Optical Elements,” Sensors 21(8), C1 (2021). [CrossRef]  

60. Y. Zhang, K. Chen, and S.-H. Yang, “Demo Codes for BLIss,” GitHub, 2024https://github.com/cyiyoo/BLIss.
