
Fast optimization of coded apertures in X-ray computed tomography


Abstract

Coded aperture X-ray computed tomography (CAXCT) is a novel X-ray imaging system capable of reconstructing high-quality images from a reduced set of measurements. Coded apertures are placed in front of the X-ray source in CAXCT so as to obtain patterned projections onto a detector array. Then, compressive sensing (CS) reconstruction algorithms are used to reconstruct the linear attenuation coefficients. The coded aperture is an important factor that influences the point spread function (PSF), which in turn determines the capability to sample the linear attenuation coefficients of the object. A coded aperture optimization approach was recently proposed based on the coherence of the system matrix; however, this algorithm is memory intensive and it is not able to optimize the coded apertures for the large image sizes required in many applications. This paper introduces a significantly more efficient approach for coded aperture optimization that reduces the memory requirements and the execution time by orders of magnitude. The features are defined as the inner product of the vectors representing the geometric paths of the X-rays with the sparse basis representation of the object; therefore, the algorithm aims to find a subset of features that minimizes the information loss compared to the complete set of projections. This subset corresponds to the unblocking elements in the optimized coded apertures. The proposed approach solves the memory and runtime limitations of the previously proposed algorithm and provides a significant gain in reconstructed image quality compared to that attained by random coded apertures in both simulated and real datasets.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

X-ray computed tomography (CT) is a non-invasive imaging approach capable of reconstructing a three-dimensional object from X-ray tomographic measurements acquired at many different view angles. As such, it is an imaging modality of critical importance in security surveillance [1], medical/biological imaging [2–4] and non-destructive testing [5,6]. The forward problem of CT can be modeled with an analytical formulation or an algebraic formulation. The former is usually solved via the filtered back-projection (FBP) algorithm due to its low memory demands and computational efficiency [7]. However, FBP requires numerous projections to obtain high-quality reconstructed images [1,8,9], which leads to a high X-ray radiation dose that can damage the specimen, for example by deteriorating body cells [10]. On the other hand, the algebraic formulation is conventionally solved using algebraic reconstruction techniques (ART) and sparsity-exploiting methods based on compressed sensing (CS) [11]. The CS-based approaches can improve the reconstructions from CT systems using a reduced number of X-ray paths, such as systems with limited view angles [12,13]. Yet, subsampling in these approaches is usually uniform and it is difficult to estimate the sufficient-sampling conditions for real CT applications [14]. These approaches thus fall short in obtaining high-quality reconstructions as the number of measurements is further reduced. CAXCT was introduced to further reduce the radiation dose and simultaneously improve the reconstruction performance by placing binary coded apertures in front of the X-ray source [15–18]. Different from the uniform sampling in the algebraic formulation of conventional CT, CAXCT samples the object randomly. The reconstruction in CAXCT is, however, an ill-posed inverse problem, and thus CS is used to reconstruct the images.

Coded apertures placed in front of the source can typically improve the reconstruction quality of the CT system, since the coherence of the sensing matrix is reduced by the structured illumination attained by the coded apertures [15]. Kaganovsky et al. analyzed four compressive sampling strategies in CAXCT, (I) uniform detector subsampling, (II) uniform view subsampling, (III) random detector subsampling and (IV) random view subsampling, and demonstrated that the random detector subsampling strategy exhibited the best performance [19]. More precisely, the only varying components in CAXCT, besides the number of angles used to project the X-rays, are the patterns used in the coded apertures, which determine the PSF of the system, the radiation dose, and the projections, which in turn determine the attainable image quality. The coded apertures used in CAXCT in [15–17,19] are binary (blocking and unblocking elements) and random for all view angles; thus, the structure matrix is not considered. Cuadros et al. first proposed a gradient descent optimization approach based on the PSF of the system and the restricted isometry property (RIP) principle of the compressive projections to optimize the coded apertures [20]. The peak signal to noise ratios (PSNR) of the reconstructed images with optimized coded apertures exhibited significant gains (∼3.8 dB for an image with 64 × 64 pixels) over those with random coded apertures. However, the memory requirements and computational complexity of the calculation of the Gram matrix limited the optimization to small image sizes only, even with parallel computation.

This paper introduces a memory efficient and fast algorithm to optimize the coded apertures in CAXCT based on the minimum information loss defined as the minimum correlation of the X-ray geometry onto the sparse basis. Furthermore, in order to optimize the coded apertures for high-resolution images and detectors, division strategies and multi-stage optimization are introduced to minimize the proposed cost function given the hardware limitations when dealing with large data cubes. Finally, the X-ray fan-beam CT architecture is used to demonstrate the performance of the optimized coded apertures. Note that the proposed optimization can be easily extended to other geometries as well as other optical systems, such as compressive X-ray tomosynthesis [18,21], spectral CT [22], and coded aperture snapshot spectral imaging (CASSI) [23–25].

2. Forward model

The optical setup of CAXCT is illustrated in Fig. 1. A coded aperture pattern is placed in front of the fan beam X-ray source to modulate the radiation onto the flat detector [19]. As shown, the pixels on the coded aperture have a one-to-one correspondence with the pixels on the flat detector. A generalized mapping in which a coded aperture pixel impinges on several detector elements is also possible, and the method described in this paper can be easily adapted to such cases. The locations of the X-ray source and the corresponding detector vary along a circular trajectory, and different coded aperture patterns are placed at different view angles. The measurements are described using the Beer-Lambert law [1] as

$$y(\theta,\eta)=\alpha(\theta,\eta)\int I(\theta,\eta,E)\exp\!\left(-\iint \mu(x,y,E)\,dx\,dy\right)dE, \qquad (1)$$
where μ(x, y, E) represents the linear attenuation coefficient of the object at the Cartesian coordinates x and y at the X-ray energy E, and α(θ, η) and I(θ, η, E) represent the coefficients of the coded apertures and the intensity of the X-ray source at the view angle θ, direction η and X-ray energy E, respectively. Due to the pixelated nature of the detector array, the continuous model is sampled discretely. Thus, for a discretized object of N × N pixels denoted by f ∈ ℜ^(N²×1), the measurements acquired by a flat detector with M pixels at P view angles are given by the following discrete-to-discrete model
$$y = CWf, \qquad (2)$$
where C ∈ ℜ^(MP×MP) is a diagonal matrix whose diagonal elements represent the coded apertures at each view, and W ∈ ℜ^(MP×N²) is referred to as the structure matrix. Each row of W represents the intersection of one X-ray path with all the pixels of the object. The pixels on the coded apertures have two possible values, “0” and “1”, which represent blocking and unblocking elements, respectively. Thus, the effect of the coded aperture can be seen as selecting or deleting rows of the structure matrix W. An example of the imaging process using random coded apertures with 50% transmittance for M = P = 2N = 8 and D = 32 is depicted in Fig. 2, where D is both the number of unblocking elements on the coded apertures and the total number of unblocked measurements.
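A minimal numerical sketch of this discrete forward model follows, using small random stand-ins for the structure matrix W and the object f; the sizes mirror the toy example of Fig. 2, and the variable names are illustrative only, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N, M, P = 4, 8, 8                        # N x N object, M detector pixels, P view angles
W = rng.random((M * P, N * N))           # stand-in structure matrix (ray/pixel intersections)
f = rng.random(N * N)                    # vectorized object

# Binary coded apertures with D = 32 unblocking elements (50% transmittance).
D = 32
code = np.zeros(M * P)
code[rng.choice(M * P, size=D, replace=False)] = 1
C = np.diag(code)                        # diagonal coded-aperture matrix

y = C @ W @ f                            # coded measurements; blocked rows are zeroed
y_compact = (W @ f)[code == 1]           # equivalently, keep only the D unblocked rows
```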

Fig. 1 Optical setup of coded aperture X-ray CT. (a) The object is illuminated by coded fan-beam X-ray sources at P positions [S1, S2, · · · , SP], and the projections are captured by a flat detector. Part of the X-ray radiation is blocked by the blocking elements on the coded apertures and the corresponding pixels on the flat detector are discarded. (b) CA represents coded aperture, and the white and black squares represent unblocking and blocking elements, respectively.

Fig. 2 The structure matrix W, the coded aperture matrix C, the vectorized object f = Ψz, where Ψ is the sparse basis matrix and z is the vectorized sparse representation, and the vectorized measurements y for a fan beam CAXCT system with P = 8 view angles, M = 8 detectors per view angle and a N × N = 4 × 4 image. The CT system matrix is H = WΨ. The coded apertures have 50% transmittance, that is, the number of unblocking elements on the coded apertures is D = 32.

Radiation dose reduction is determined by the number of unblocking elements D on the coded apertures. The subsampling rate of CAXCT is calculated as D/N², and the transmittance of the coded apertures is defined as D/MP. Therefore, reducing radiation and maintaining a sufficiently high sampling rate for high quality image reconstruction are conflicting goals under the Shannon-Nyquist sampling theorem. CS principles, however, can be used to drastically reduce the number of measurements without loss of reconstruction fidelity [26–28]. The ill-posed set of equations in Eq. (2) can be accurately solved as long as f is sufficiently sparse in some basis Ψ ∈ ℜ^(N²×N²) and such basis is incoherent with the measurement matrix CW. The imaging process can be re-written as y = CWΨz = Az, where A = CWΨ is the sensing matrix in CS and z is the sparse representation. The sparse representation of f can be obtained by solving the following nonlinear optimization problem

$$\hat{z} = \arg\min_{z} \; \|y - Az\|_2^2 + \lambda\|z\|_1, \qquad (3)$$
where λ is the regularization constant, and ‖·‖₁ and ‖·‖₂ represent the ℓ₁ and ℓ₂ norms, respectively. A number of algorithms have been developed recently to solve the inverse problem in Eq. (3) [26–30].
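As a rough illustration of how Eq. (3) can be solved in practice, the sketch below implements plain iterative soft-thresholding (ISTA); it is not the GPSR solver used later in the paper, and the step size, iteration count and function names are assumptions made only for illustration.

```python
import numpy as np

def ista(A, y, lam=0.01, n_iter=500):
    """Minimize ||y - Az||_2^2 + lam * ||z||_1 by iterative soft-thresholding."""
    step = 1.0 / (2 * np.linalg.norm(A, 2) ** 2)   # 1 / Lipschitz constant of the gradient
    z = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = 2 * A.T @ (A @ z - y)               # gradient of the quadratic data term
        z = z - step * grad
        z = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold (l1 proximal)
    return z

# Usage sketch: f_hat = Psi @ ista(C @ W @ Psi, y) recovers the image from coded measurements.
```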

3. Coded aperture optimization

The CT system matrix can be calculated as H = WΨ, where W is the structure matrix given by the hardware settings and Ψ is the sparse basis representation of the object. Note that the mth row of the CT system matrix H, where m = 1, 2, · · · , MP, corresponds to the inner product of the mth row of the structure matrix with the columns of the sparse basis. That is, the mth row of the system matrix H represents the inner product of a particular X-ray path onto the basis representation. When a coded aperture is placed in front of the X-ray source, the coded aperture elements select the rows of the CT system matrix H associated with the pixels of the detector that correspond to the unblocking coded aperture elements and delete the rows corresponding to the blocking elements. The sensing matrix is given by

$$A = CH, \qquad (4)$$
where C is a binary sparse diagonal matrix. A in Eq. (4) is formed by the rows of the matrix H that correspond to the detectors selected by the non-zero entries of the matrix C. A possible approach to arrange the blocking and unblocking elements of the coded apertures is to distribute them randomly as proposed in initial models of coded aperture compressive X-ray CT [19]. The sensing matrix of the system in Eq. (4), however, is highly structured as depicted in Fig. 2. As such, random coded apertures are not optimal [20, 23–25].

In this paper, a coded aperture optimization framework based on feature selection techniques is proposed. Feature selection is a procedure used in data science that selects a subset of relevant features for use in model construction [31]. That is, features from a dataset are selected based on a metric such that features with redundant information are removed with the least information loss. Several metrics are commonly used in feature selection, such as mutual information, inter/intra class distance, and the scores of significance tests for each class/feature combination [32,33]. A useful criterion for feature selection that can be applied to the coded aperture optimization is to reduce the correlation between the subset of selected features [34]. In the forward model of the CAXCT system, the CT system matrix H can be considered as a large set of variables where each row represents the inner product of a particular X-ray path onto the sparse basis representation and each column is associated with a particular voxel. Therefore, in our coded aperture optimization approach, the dataset corresponds to the complete set of columns of the matrix H and the features correspond to the rows of the matrix, that is, the inner products of the X-ray paths onto the basis representation. Using the minimum correlation between these inner products as the minimum information loss metric, the cost function for the coded aperture optimization is defined as

$$J = \|AA^T\|_2^2 = \|CH(CH)^T\|_2^2 = \|CHH^TC\|_2^2 = N^2\|CQC\|_2^2 = N^2\sum_{i=1}^{MP}\sum_{j=1}^{MP}\left(C_{ii}Q_{ij}C_{jj}\right)^2, \qquad (5)$$
where Q is the correlation matrix of H^T. Given that images reconstructed from CS are based on the fluctuations of the measurements, the problem can be formulated using a standardized matrix H. Thus, the correlation matrix Q is a symmetric matrix and Q_ii = 1. Furthermore, as defined in the previous section, C_ii = 1 for unblocking elements and C_ii = 0 for blocking elements; thus, Eq. (5) becomes
$$\arg\min J = \arg\min \sum_{i \neq j} Q_{ij}^2 \quad \text{subject to} \quad C_{ii} = 1 \text{ and } C_{jj} = 1, \;\; \forall\, i, j, \qquad (6)$$
where Q_ij is the Pearson product-moment correlation coefficient (PCC) of the ith and jth rows of the CT system matrix H.
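A small sketch of how this cost could be evaluated numerically is given below; np.corrcoef treats each row of H as a variable, which matches the definition of Q as the correlation matrix of H^T. The function name and the explicit exclusion of the diagonal terms are illustrative choices, not taken from the paper.

```python
import numpy as np

def pcc_cost(H, code):
    """Sum of squared off-diagonal PCCs among the rows of H selected by the
    binary coded-aperture vector `code` (the quantity minimized in Eq. (6))."""
    Q = np.corrcoef(H)                          # Q[i, j] = PCC between rows i and j of H
    sel = np.flatnonzero(code)
    Qs = Q[np.ix_(sel, sel)]
    return float(np.sum(Qs ** 2) - len(sel))    # drop the Q_ii = 1 diagonal terms
```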

The proposed cost function is different from the cost function of the gradient approach in [20]. The cost function in [20] is defined as the ℓ2 norm of the Gram matrix, which corresponds to ‖A^T A‖_2 and is related to the coherence of the sensing matrix A and the RIP principle. This cost function is accurate but requires extreme computational resources. On the other hand, the cost function proposed in the present manuscript is defined through the squared Pearson correlation coefficient (PCC) matrix of the coded aperture CT system matrix, that is ‖AA^T‖_2^2, which is related to the minimum information loss principle.

Additionally, the correlation of the aforementioned inner products can be used to theoretically explain why the sampling of CAXCT using random coded apertures performs better than angle subsampling without coded apertures. As shown in Appendix A, a set of X-ray paths with more intersections leads to smaller correlation coefficients. In the CAXCT system, the X-ray paths selected with the coded apertures have more intersections compared to angle subsampling, and thus CAXCT with random coded apertures performs better than conventional limited angle tomography with the same number of X-ray paths [35]. The average number of ray intersections of D random X-ray paths is (D² + 3D + 4)/4 and the maximum number of intersections is (D² + D + 2)/2 [35]. Thus, random coded apertures are not optimal. However, it should be noted that it is difficult to reach the upper bound (D² + D + 2)/2, because the X-ray paths available for optimization are a subset of all possible X-ray paths at the given view angles.


Algorithm 1. Local optimization of coded apertures

Combinatorial approaches to solve the optimization problem formulated in Eq. (6) are NP hard [36]. Moreover, iterative approaches, such as direct binary search (DBS) and gradient descent, which have been used in the optimization of coded apertures [18,20], have limited applicability due to their computational complexity. Thus, a fast, simple and locally optimal algorithm is proposed to minimize the cost function. This locally optimal approach aims at searching for the set of X-ray paths that minimizes Eq. (6), as detailed in Algorithm 1. This local search may not reach the global optimum, but it is able to optimize the coded aperture in reasonable time and reduce the memory requirements. Note that the sum of the squared PCC of all the X-ray paths can be directly calculated as the sum of the corresponding row in the squared PCC matrix Q² of the matrix H^T. Thus, the PCC matrix Q of the transpose of the CT system matrix, H^T, is calculated in step 1. As depicted in steps 2–4, the algorithm starts by selecting the column with the minimum sum of the squared PCC as the first unblocking element of the coded apertures. That is, the sum of all columns in the matrix Q² is calculated, and the element of the coded apertures that corresponds to the index with the minimum value in the sum is set to 1. Then, as shown in steps 5–7, the column with the minimum sum of squared PCC within the selected columns of the CT system matrix H^T is selected as the next unblocking element. That is, the sum of the columns corresponding to the selected X-ray paths in the squared PCC matrix Q² is calculated, and the element of the coded apertures that corresponds to the index with the minimum value in the sum is set to 1. This process, steps 5–7, is repeated until the number of unblocking elements reaches D. Compared with the gradient approach proposed in [20], which calculates the gradient of the Frobenius norm of the difference between the Gram matrix and the identity matrix after each update, the PCC matrix generated in the proposed fast algorithm is stored in memory for further numerical calculations. Note that the vector “codes” in Algorithm 1 corresponds to the diagonal of the coded aperture matrix C.
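Since the pseudo-code of Algorithm 1 is only available as an image, a greedy sketch of the procedure described above is given here; the function and variable names are illustrative, and details such as tie-breaking are assumptions rather than the paper's exact implementation.

```python
import numpy as np

def optimize_code(H, D):
    """Greedy local search in the spirit of Algorithm 1: pick D rows of the CT
    system matrix H (unblocking coded-aperture elements) with minimum
    accumulated squared Pearson correlation."""
    Q2 = np.corrcoef(H) ** 2                 # squared PCC matrix (step 1)
    codes = np.zeros(H.shape[0], dtype=int)  # diagonal of the coded-aperture matrix C

    # Steps 2-4: first unblocking element = row with minimum total squared PCC.
    first = int(np.argmin(Q2.sum(axis=0)))
    codes[first] = 1
    score = Q2[:, first].copy()              # accumulated squared PCC w.r.t. selected rows

    # Steps 5-7: repeatedly add the row least correlated with the rows already selected.
    for _ in range(D - 1):
        score[codes == 1] = np.inf           # never re-select a row
        nxt = int(np.argmin(score))
        codes[nxt] = 1
        score = score + Q2[:, nxt]           # update the accumulated squared PCC
    return codes
```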

Algorithm 1 performs well for N up to 128; however, for larger CT system matrices a different approach is necessary given the memory requirements. For instance, the dimensions of the CT system matrix for N = 256 are (4 × 256 × 256) × (256 × 256); that is, ∼512 GB of memory are necessary to calculate the PCC matrix. Therefore, a method to divide the CT system matrix into multiple sub-matrices is proposed. The sub-matrices of the CT system matrix can be obtained in three ways: (i) uniform division: the rows are selected at a fixed stride; (ii) random division: the rows are selected randomly; and (iii) separated division: the rows are selected sequentially. The random, separated and uniform division mechanisms are depicted in Figs. 3(a)–3(c), respectively. The resulting sub-matrices are independently optimized using Algorithm 1 to obtain D/L selected X-ray paths in each sub-matrix, where L is the number of sub-matrices obtained from H. Finally, the optimal sub-matrices are combined to obtain the optimal matrix. Note that this division is used because the memory of the simulation platform is not sufficient to calculate the PCC matrix of H^T for 256 × 256 images or larger.
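The three division strategies amount to index selections over the rows of H; the helper below is illustrative (the names and the use of numpy's array_split are assumptions), mirroring the toy example of Fig. 3.

```python
import numpy as np

def divide_rows(n_rows, L, mode, rng=None):
    """Split the row indices 0..n_rows-1 of H into L groups according to the
    uniform, separated or random division strategies of Fig. 3."""
    idx = np.arange(n_rows)
    if mode == "uniform":                      # rows picked with a constant stride
        return [idx[m::L] for m in range(L)]
    if mode == "separated":                    # contiguous blocks of rows
        return np.array_split(idx, L)
    if mode == "random":                       # a random permutation split into L parts
        rng = np.random.default_rng() if rng is None else rng
        return np.array_split(rng.permutation(idx), L)
    raise ValueError(f"unknown division mode: {mode}")

# Example: divide_rows(12, 3, "uniform") groups rows {0,3,6,9}, {1,4,7,10}, {2,5,8,11},
# matching the uniform division of Fig. 3(c).
```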

Fig. 3 Division of the CT system matrix H with 12 rows into 3 submatrices, S1, S2 and S3. (a) H is divided randomly. The 1st, 4th, 7th and 10th rows are combined in S1, the 3rd, 5th, 6th and 9th rows are combined in S2, and the 2nd, 8th, 11th and 12th rows are combined in S3. (b) H is divided such that the 1st, 2nd, 3rd and 4th rows are combined in S1, the 5th, 6th, 7th and 8th rows are combined in S2, and the 9th, 10th, 11th and 12th rows are combined in S3. (c) H is divided uniformly. The 1st, 4th, 7th and 10th rows are combined in S1, the 2nd, 5th, 8th and 11th rows are combined in S2, and the 3rd, 6th, 9th and 12th rows are combined in S3.

The aforementioned approach, however, falls short when the number of selected X-ray paths is large, given that the heuristic search algorithm ranks alternatives at each branching step based on the information loss. Furthermore, the correlations of the inner products of the X-ray paths onto the sparse basis representation in different sub-matrices are not considered. Thus, a “two-stage” search algorithm based on the division method is proposed to improve the performance of the coded aperture optimization for large image sizes. The first stage of the algorithm consists of dividing the CT system matrix using the strategies described in Fig. 3; then, Algorithm 1 is used to obtain L optimized sub-matrices Gm; and finally, a new matrix B is defined, formed with the optimized sub-matrices Gm, m = 1, 2, . . ., L, such that B = [G1 | . . . | GL]^T. Note that the number of selected paths D′ is larger than D/L for each sub-matrix in the division method in order to take into account the correlation between the separated parts of the original CT system matrix. The second stage of the algorithm consists of applying Algorithm 1 to the matrix B to obtain the desired number of X-ray paths D from the D′L paths selected in the first stage. The size of D′, which in turn determines the size of the sub-matrix Gm, is determined by the computational platform used for the optimization. It should be noted that a larger number of rows leads to coded apertures with better performance at the cost of larger memory requirements. However, the proposed two-stage algorithm locally considers the correlations of the inner products of the X-ray paths onto the sparse basis representation in the divisions of the original CT system matrix. Note that the number of rows in each optimized sub-matrix can be made larger, which would yield better results; however, the execution time would increase.

To illustrate the algorithm, consider the coded aperture optimization for N = 256 at a 50% sub-sampling rate with a CT system matrix H ∈ ℜ^((4×256²)×256²). Using the two-stage algorithm, H is first divided into L = 4 sub-matrices Hm, where m = 1, 2, 3, 4, and then, using Algorithm 1 on each sub-matrix, four optimal sub-matrices Gm ∈ ℜ^(128²×256²), m = 1, 2, 3, 4, are obtained. Finally, Algorithm 1 is applied to the new matrix B formed by concatenating the optimal sub-matrices Gm to obtain the optimal 32768 (50% of 256²) X-ray paths. It should be noted that this approach improves the optimization for large images at the cost of doubling the runtime.
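Combining the division helper and the Algorithm 1 sketch above, the two-stage procedure can be outlined as follows; the oversampling factor of 2 matches the example just described, while the function names and the mapping back to a full-length code vector are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def two_stage_optimize(H, D, L=4, oversample=2):
    """Two-stage coded aperture optimization sketch: optimize each 'separated
    division' block with optimize_code (Algorithm 1 sketch), keep D' = oversample*D/L
    rows per block, then run optimize_code again on the concatenated matrix B."""
    blocks = np.array_split(np.arange(H.shape[0]), L)   # separated division of the rows
    d_prime = oversample * D // L
    kept = []
    for rows in blocks:
        sub_codes = optimize_code(H[rows], d_prime)     # first stage, per sub-matrix
        kept.append(rows[sub_codes == 1])
    kept = np.concatenate(kept)                         # rows forming B = [G1 | ... | GL]^T
    codes_B = optimize_code(H[kept], D)                 # second stage on B
    final = np.zeros(H.shape[0], dtype=int)
    final[kept[codes_B == 1]] = 1                       # map back to the full coded aperture
    return final
```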

4. Computer simulations

To further study the proposed optimization approach in CAXCT, computer simulations for a fan beam X-ray source and a 128-pixel (128 × 1) flat detector are performed. The optimized and random coded apertures are simulated in front of the X-ray source at 128 view angles, and the 64 × 64 pixel “Walnut Phantom” [37] is used as the object f. The geometric length of the flat detector is ∼40 cm, and the source-to-object and source-to-detector distances are 40 cm and 80 cm, respectively. That is, N = 64, and P = M = 128. The ASTRA Tomography Toolbox [38] is used to calculate the discrete-to-discrete structure matrix W and the corresponding measurements y. In our optimization, the 2D Haar wavelet is used as the sparse basis as it performs well in 2D image compression [39,40], and the gradient projection for sparse reconstruction (GPSR) algorithm with 1000 iterations is used to reconstruct the images for both random and optimized coded apertures [29]. To evaluate the reconstructed images quantitatively, the peak signal-to-noise ratio (PSNR) is used, defined as

$$\mathrm{PSNR} = 20\log_{10}\!\left(\frac{f_{\max}}{\sqrt{\mathrm{MSE}}}\right), \qquad (7)$$
where MSE = ‖f − f̂‖_2^2 / N², and f̂ and f_max denote the reconstructed image and the maximum value of f, respectively.
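For reference, a direct numpy implementation of this metric might look as follows (assuming f_max is taken as the maximum value of the ground-truth image f):

```python
import numpy as np

def psnr(f, f_hat):
    """PSNR of the reconstruction f_hat against the ground-truth image f, Eq. (7)."""
    mse = np.mean((f - f_hat) ** 2)              # ||f - f_hat||_2^2 / N^2
    return 20.0 * np.log10(f.max() / np.sqrt(mse))
```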

The runtime for MATLAB to optimize the coded apertures with 50% and 75% subsampling rates (12.5% and 18.75% transmittance) is ∼32 seconds and ∼27 seconds, respectively, on a Lenovo Ideapad laptop (2.5 GHz Intel Core i5, 8 GB memory). The runtime for MATLAB to calculate the PCC matrix is ∼18 seconds. On the other hand, the runtime for MATLAB using the gradient descent approach is ∼12–24 hours on a server with 20 cores and 64 GB memory using a parallel implementation of the algorithm proposed in [20]. The runtime is thus decreased by three orders of magnitude. Figures 4(b)–4(d) depict the reconstructions of the “Walnut phantom” for a 20.3% transmittance using the proposed algorithm, the gradient descent approach proposed in [20] and random coded apertures. The PSNR of each reconstruction is shown in the corresponding figure. In the random coded aperture case, the PSNR is the average of 10 different realizations. Additionally, Figs. 4(e)–4(g) depict the normalized absolute errors for the three cases, respectively; as can be seen, random coded apertures lead to more artifacts in the reconstructions compared to the optimized coded apertures. The PSNR gains of the proposed approach and the gradient descent approach are 3.5 dB and 3.8 dB, respectively.

Fig. 4 (a) “Walnut phantom” and reconstructed images using (b) proposed approach, (c) gradient descent approach [20] and (d) random coded apertures. Absolute error images for (e) proposed approach, (f) gradient descent approach and (g) random coded apertures. Note that more artifacts are present in the reconstructions from random X-ray paths.

The singular value decomposition (SVD) analysis of the CT system matrix, which has been previously used as a tool to compare the performance of coded apertures [19], is depicted in Fig. 5. The measurement strategy with larger non-zero singular value components is considered to capture more orthogonal components of the object and thus leads to a less ill-conditioned reconstruction. The SVD of the sensing matrix A = CWΨ with optimized coded apertures is calculated and 3326 nonzero singular values are obtained. The singular values of the sensing matrix with random coded apertures correspond to the average of 10 different selections. It can be seen that the singular values of the proposed approach and the gradient descent approach are similar, and both are higher than those of random coded apertures. Based on the PSNR gain of the reconstructed images and the SVD analysis, the performance of the proposed optimization approach is similar to that of the gradient approach on the same simulated dataset in [20], and the proposed approach performs better than random coded apertures.
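The comparison amounts to computing the singular values of the rows of WΨ that survive each coded aperture; a minimal sketch (illustrative names, dense matrices assumed small enough for a direct SVD) is:

```python
import numpy as np

def singular_values(W, Psi, code):
    """Singular values of the effective sensing matrix A = C W Psi, i.e. the rows
    of W @ Psi selected by the binary coded-aperture vector `code`."""
    A = (W @ Psi)[code == 1]
    return np.linalg.svd(A, compute_uv=False)

# Larger singular values (for the same number of kept rows) indicate a less
# ill-conditioned reconstruction, which is how optimized and random codes are compared.
```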

Fig. 5 SVD of the sensing matrix using random and optimal coded apertures. The highest and lowest singular values are highlighted in each case.

Additional simulations using the 128 × 128 “Walnut phantom” and a flat detector with 256 pixels are performed. The object is scanned at 256 view angles with a fan beam X-ray source, and the GPSR algorithm is used with 250 iterations. The runtimes for MATLAB to optimize the coded apertures at 75%, 50%, 25%, 12.5% and 6.25% subsampling rates, corresponding to 18.75%, 12.5%, 6.25%, 3.12% and 1.56% transmittance, are 549 seconds, 495 seconds, 440 seconds, 412 seconds and 397 seconds on a Dell desktop (3.6 GHz Intel Core i7, 64 GB memory), respectively. The runtime for MATLAB to calculate the PCC matrix is ∼525 seconds. The runtime for MATLAB using the gradient descent approach is ∼3 days on a server with 20 cores and 256 GB memory based on a parallel implementation of the algorithm. Note that in this scenario the runtime of the optimization is also reduced by three orders of magnitude compared to the approach in [20]. The reconstructed images using optimized coded apertures at 25%, 50% and 75% subsampling rates are depicted in Figs. 6(a)–6(c), and Figs. 6(d)–6(f) depict the reconstructed images when using random coded apertures for the same subsampling rates. It is seen that the details (edges) in the reconstructions using optimized codes are much clearer than those using random codes. The artifacts in the background are also reduced when the proposed optimal coded apertures are used. Additionally, the PSNRs of the reconstructions using optimized codes at 25%, 50% and 75% subsampling rates are 24.37 dB, 29.50 dB and 33.83 dB, respectively, and those using random codes at 25%, 50% and 75% subsampling rates are 21.48 dB, 26.23 dB and 29.90 dB, respectively. Thus, as in the previous simulation scenario, the average PSNR gain is 3.37 dB. The analysis of the SVD at different subsampling rates, depicted in Fig. 7, also shows the improvement with optimal codes.

Fig. 6 The reconstructed 128 × 128 image of the “Walnut phantom” with optimized coded apertures at (a) 25%, (b) 50% and (c) 75% subsampling rate; with random coded aperture at (d) 25%, (e) 50% and (f) 75% subsampling rate.

Fig. 7 The plots of singular values as a function of component numbers at 25%, 50% and 75% subsampling rate for optimized and random coded apertures of 128 × 128 images, respectively. The highest and lowest singular values are highlighted in each case.

A third simulation scenario for the 256 × 256 “Walnut phantom” is analyzed, with 512 view angles and a flat detector with 512 pixels. The approach in [20] cannot be used to optimize the coded apertures due to its high runtime and memory cost. Furthermore, as mentioned in Section 3, the PCC matrix cannot be calculated for N > 128 due to hardware limitations. Thus, in order to optimize the coded apertures, the CT system matrix is divided into four sub-matrices, Hm ∈ ℜ^((256×256)×(256×256)), m = 1, 2, 3, 4, using the strategies described in the previous section. A 50% subsampling rate is a good representative setting for coded aperture computational imaging [41]. The average PSNRs of the uniform, random and separated divisions at 50% subsampling rate are 29.15 dB, 29.06 dB and 30.31 dB, respectively. The PSNR for random coded apertures is 28.03 dB. The strategy referred to as “separated division” yields the best performance. Thus, simulations for different subsampling rates are performed using this strategy to compare the reconstructions obtained using optimized and random coded apertures for N = 256.

Then, the two-stage optimization is performed based on the results of the “separated division” with Hm ∈ ℜ^((256×256)×(256×256)), m = 1, 2, 3, 4, and L = 4. Algorithm 1 is used to obtain four optimized sub-matrices Gm ∈ ℜ^((2D/L)×256²). Algorithm 1 is then applied to the new matrix B ∈ ℜ^(2D×256²) formed with the optimized sub-matrices, and finally the D × 256² optimized matrix is obtained. The results are depicted in Figs. 8(a)–8(i) for 25%, 50% and 75% subsampling rates. The runtimes for the two-stage optimization and the “separated division” optimization are ∼2 hours and ∼1 hour, respectively. The PSNRs of the reconstructions using two-stage optimized codes, “separated division” codes and random codes at 25%, 50% and 75% subsampling rates are 26.46 dB, 31.12 dB and 34.08 dB; 25.72 dB, 30.31 dB and 33.77 dB; and 23.50 dB, 28.03 dB and 31.05 dB, respectively. The division operation reduces the PSNR gain from 3.4 dB to 2.4 dB compared with the results obtained using Algorithm 1 for N = 64 and N = 128. The two-stage optimization improves the performance of the division operation, and thus the PSNR gains at 25% and 50% subsampling rates are 2.9 dB and 3.1 dB, respectively. The PSNR gain using two-stage optimization is not significant (∼0.3 dB) at 75% subsampling rate because the subsampling rate is large relative to the search range of the optimization in the second stage. The obtained optimal coded apertures still perform better than random coded apertures and present fewer artifacts in the reconstructions. Furthermore, the division strategies and two-stage optimization provide an alternative to perform the coded aperture optimization for large image sizes.

Fig. 8 The reconstructed 256×256 image of the “Walnut phantom” with two-stage optimized coded apertures at (a) 25%, (b) 50% and (c) 75% subsampling rate; “separated division” optimized coded apertures at (d) 25%, (e) 50% and (f) 75% subsampling rate; with random coded aperture at (g) 25%, (h) 50% and (i) 75% subsampling rate.

5. Optimization using real X-ray tomography projections

In the previous simulations, the measurements y were generated using the “Walnut phantom” f at different resolutions and the structure matrix W obtained from the ASTRA toolbox, such that y = CWf. In this section, real projections from a slice of a lotus root available in [42] are used as the measurements y. Even though the experiments to obtain the dataset were performed without coded apertures, the projections of CAXCT can be obtained by deleting the measurements corresponding to the blocking elements on the coded apertures. The distance between the source and detector and that between the source and the center of rotation are 63 cm and 54 cm, respectively. The sinogram and the structure matrix corresponding to 120 view angles and 429 pixels on the flat detector are also available in [42]. It should be noted that 120 view angles corresponds to a limited angle scenario; thus the reconstructions obtained with these measurements will contain several streak artifacts. Figures 9(a) and 9(b) depict the reconstructions obtained using all the available measurements with GPSR and FBP, respectively. It can be seen that the streak artifacts are more noticeable in the FBP reconstruction than in the GPSR reconstruction; however, some artifacts remain even when the GPSR algorithm is used. These artifacts will also appear when using coded apertures since the number of X-ray paths will be further reduced. Underflow in the numerical calculation of the PCC is avoided by keeping the corresponding elements on the coded apertures blocked, because the corresponding measurements provide little information about the object. The runtime for the optimization of 128 × 128 images is ∼12 minutes. The reconstructions of the lotus root with 128 × 128 pixels at 50% subsampling rate (15.9% transmittance of all 51480 X-ray paths) for optimized and random coded apertures are depicted in Figs. 10(a) and 10(b), respectively. The result of conventional CT with the same number of X-ray paths without coded apertures is shown in Fig. 10(c). It is seen that random coded apertures improve on the conventional CT reconstruction, and optimized coded apertures improve on the random ones as well.
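Emulating CAXCT on such a conventionally acquired dataset reduces to row selection, as the following illustrative helper shows (the names are assumptions; y_full and W_full stand for the full sinogram and structure matrix from the dataset):

```python
import numpy as np

def apply_coded_aperture(y_full, W_full, code):
    """Emulate CAXCT on measurements acquired without coded apertures: keep only
    the measurements and structure-matrix rows selected by the binary vector `code`."""
    keep = code == 1
    return y_full[keep], W_full[keep]

# The reduced pair (y, W) is then fed to the CS reconstruction of Eq. (3).
```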

Fig. 9 256 × 256 Lotus root reconstructed images with full available projections using (a) GPSR and (b) FBP algorithm.

Fig. 10 128 × 128 Lotus root reconstructed images using (a) optimized coded apertures, (b) random coded apertures and (c) conventional CT where no coded apertures are used. All reconstructions shown in this figure use the same number of X-ray measurements, 50% subsampling rate.

The reconstructions of the lotus root with 256 × 256 pixels at 50% subsampling rate (63.7% transmittance of all 51480 X-ray paths) are depicted in Figs. 11(a)–11(c). The runtime for the optimization of 256 × 256 images is ∼25 minutes. Note that “separated division” and “two-stage” optimization are not necessary in this case because the number of X-ray paths is still 51480 for the 256 × 256 images, so the PCC matrix can be calculated and stored. The “two-stage” optimization is not performed in this case because the number of available X-ray paths is very small [38]. However, the results of “separated division” are also provided to experimentally demonstrate that this approach can improve on the random coded apertures at a small cost in the reconstruction quality. It is seen that the artifacts of the lotus root are significantly reduced with optimized coded apertures. Additionally, Figs. 11(d)–11(f) depict the normalized absolute errors for the three cases, respectively; as can be seen, random coded apertures lead to more artifacts in the reconstructions compared to the optimized coded apertures. However, it should be noted that the gain in reconstruction quality for the image with 256 × 256 pixels is much smaller than that for the image with 128 × 128 pixels. There are two reasons: (1) only 50% (L = 2) of the PCC matrix is used in the reconstruction of Fig. 11(b) due to the use of separated division; (2) the experiments were performed in a limited view angle setting, thus only 51480 paths were available. The optimization for the image with 128 × 128 pixels searches for a combination of 8192 X-ray paths out of 51480 X-ray paths, whereas that for the image with 256 × 256 pixels searches for a combination of 32768 X-ray paths out of 51480 X-ray paths.

Fig. 11 256 × 256 Lotus root reconstructed images using (a) random coded apertures, (b) optimized coded apertures with “separated division” and (c) optimized coded apertures without “separated division”. Absolute error images for (d) random coded apertures, (e) optimized coded apertures with “separated division” and (f) optimized coded apertures without “separated division”. Note that more artifacts are present in the reconstructions from random X-ray paths.

Results compared with the approach in [20] for different objects are summarized in Table 1. Note that the reference image of the slice of lotus root was calculated using Landweber iterations without coded apertures. Walnut1 and Walnut2 represent the optimization using the separated division and the two-stage approach, respectively. It is seen that the proposed approach significantly reduces the runtime at a small cost in reconstruction quality for the images with 32 × 32, 64 × 64 and 128 × 128 pixels. Furthermore, parameter tuning is not necessary in the proposed approach and the desired number of X-ray paths is achieved exactly. Additionally, the proposed approach optimizes the coded apertures to provide over 3 dB gains compared to random coded apertures. The reconstruction quality of the “Walnut phantom” with 128 × 128 pixels with optimized codes at 50% subsampling rate (29.50 dB) is similar to that of random codes (29.90 dB) at 75% subsampling rate, while the number of X-ray paths is reduced by up to 33.3%.


Table 1. PSNR and runtime for the proposed approach and the gradient descent approach

6. Conclusion

The optimization of coded apertures based on minimum information loss has been demonstrated theoretically and experimentally. A fast, local search algorithm is presented to obtain the optimized codes with orders of magnitude faster computation than state-of-the-art optimization algorithms. The PSNR of the reconstructed imagery and the SVD analysis of the optimized sensing matrices are used to assess the performance of the proposed coded aperture optimization algorithms. A fast and less memory-demanding coded aperture optimization method is also introduced, which incurs a small cost in reconstruction quality but provides significant gains in computational requirements. The presented approach is general and can be extended to other geometries of CAXCT, and to other coded aperture imaging systems such as coded aperture spectral imaging architectures.

Appendix A:

In the discrete-to-discrete model of a fan-beam CT system, the pixels of the object can be considered as dots and the paths of the X-ray source can be considered as lines through the object. Thus, the non-zero values in the sparse structure matrix W are ones, and the number of non-zero values in each row of W can be seen as the length of the corresponding line (X-ray path) in the object. Let the tth X-ray path be x_t, where x_t is an a-sparse binary vector, and the (t + 1)th X-ray path be x_{t+1}, where x_{t+1} is a b-sparse binary vector. Note that a and b represent the number of dots that the corresponding rays pass through. Two situations for the X-ray paths are considered as follows:

  1. The (t + 1)th X-ray path and the tth X-ray path do not intersect at any pixel in the object. Therefore, the element-wise product x_t ⊙ x_{t+1} is equal to the zero vector.
  2. The (t + 1)th X-ray path and the tth X-ray path intersect at one pixel in the object. Therefore, x_t ⊙ x_{t+1} is a 1-sparse vector whose non-zero element, located at the intersection, is one.

The PCC of the X-ray paths, as defined in [43], for situations (1) and (2) above is given by

$$\mathrm{corrcoef}_1 = \frac{0 - \frac{a}{N^2}\frac{b}{N^2}}{\sqrt{\left(\frac{a}{N^2} - \left(\frac{a}{N^2}\right)^2\right)\left(\frac{b}{N^2} - \left(\frac{b}{N^2}\right)^2\right)}}, \qquad (8)$$
and
$$\mathrm{corrcoef}_2 = \frac{\frac{1}{N^2} - \frac{a}{N^2}\frac{b}{N^2}}{\sqrt{\left(\frac{a}{N^2} - \left(\frac{a}{N^2}\right)^2\right)\left(\frac{b}{N^2} - \left(\frac{b}{N^2}\right)^2\right)}}. \qquad (9)$$
Then, the difference of the squared PCC is calculated as
$$\Delta = (\mathrm{corrcoef}_2)^2 - (\mathrm{corrcoef}_1)^2 = \frac{N^2 - 2ab}{N^2\left(\frac{a}{N^2} - \left(\frac{a}{N^2}\right)^2\right)\left(\frac{b}{N^2} - \left(\frac{b}{N^2}\right)^2\right)}, \qquad (10)$$
which is negative whenever 2ab > N².
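As a quick numerical sanity check of these expressions (an illustrative construction, not from the paper), the PCCs of two binary path vectors with and without a single shared pixel can be computed directly:

```python
import numpy as np

# Two binary "X-ray path" vectors of length N^2 with sparsities a and b.
N2, a, b = 16, 5, 4
x      = np.zeros(N2); x[:a] = 1                   # a-sparse path
y_none = np.zeros(N2); y_none[a:a + b] = 1         # b-sparse path, no shared pixel
y_one  = np.zeros(N2); y_one[a - 1:a - 1 + b] = 1  # same path shifted to share one pixel

r1 = np.corrcoef(x, y_none)[0, 1]                  # corrcoef_1 (no intersection)
r2 = np.corrcoef(x, y_one)[0, 1]                   # corrcoef_2 (one intersection)
print(r1, r2, r2**2 - r1**2)                       # Delta < 0 here since 2ab = 40 > N^2 = 16
```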

Fig. 12 The geometric structures of the moments (a), (b) and (c). The red lines, blue lines and green lines represent the X-ray paths in the object, the distances between the origin and the X-ray paths, and the critical length (√2/2)N, respectively.

In the example of a fan-beam CT system where the geometric lengths of the flat detector and the object are N, the source-to-object and source-to-detector distances are N and 2N, respectively. Given that Δ < 0 when the geometric length of the X-ray path in the object is larger than (√2/2)N, two critical moments, depicted in Figs. 12(a) and 12(b), are considered. Moment (a) is defined as the moment when the edge of the fan beam X-ray path (red line) passes through the position A4, and moment (b) as the moment when the edge of the fan beam X-ray path (red line) passes through the middle of the segment A1A2. The geometric lengths of the X-ray paths between moments (a) and (b) are larger than (√2/2)N because the distances between the X-ray paths and the origin are less than (√2/4)N. The geometric lengths of the X-ray paths beyond moments (a) and (b), depicted in Fig. 12(c), can be calculated as the square root of the sum of the squared lengths of the orange lines, which is larger than √((A1A2/2)² + (A1A4/2)²) = (√2/2)N. Therefore, the geometric lengths of all X-ray paths are larger than (√2/2)N, and thus Δ < 0.

Funding

National Natural Science Foundation of China (61271332); the Seventh Six-talent Peak Project of Jiangsu Province (2014-DZXX-007); National Science Foundation (NSF) (CIF 1717578); Fulbright-Finland Foundation under the 2017 Fulbright-Nokia Distinguished Chair award; China Scholarship Council (201706840090); Postgraduate Research & Practice Innovation Program of Jiangsu Province (KYCX17_0341).

References and links

1. S. J. Kisner, “Image Reconstruction for X-ray Computed Tomography in Security Screening Applications,” Purdue University (2013).

2. K. O. Khadivi, “Computed tomography: fundamentals, system technology, image quality, applications,” Medical Physics 33(8), 3076 (2016).

3. A. Momose, T. Takeda, Y. Itai, and K. Hirano, “Phase-contrast X-ray computed tomography for observing biological soft tissues,” Nature Medicine 2(4), 473–475 (1996).

4. J. Hsieh, Computed Tomography: Principles, Design, Artifacts, and Recent Advances (SPIE, 2015).

5. R. Stempert and D. Boye, “Volumetric Radiography of Watermarks,” http://digitome.davidson.edu/wp-content/uploads/2018/01/Volumetric-Radiography-of-Watermarks.pdf.

6. D. Boye, R. Garner, and R. Kozlowski, “Examining paintings on wood or canvas using 3D X-ray imaging with Digitome,” http://digitome.davidson.edu/wp-content/uploads/2018/01/Examining-paintings-using-3DX-ray.pdf.

7. F. Natterer, “Inversion of the attenuated Radon transform,” Inverse Problems 17(1), 113 (2001).

8. K. H. Tuy, “An inversion formula for cone-beam reconstruction,” SIAM Journal on Applied Mathematics 43(3), 546–552 (1983).

9. B. D. Smith, “Image reconstruction from cone-beam projections: necessary and sufficient conditions and reconstruction methods,” IEEE Trans. Medical Imaging 4(1), 14–25 (1985).

10. R. S. Bindman, J. Lipson, R. Marcus, K. P. Kim, M. Mahesh, R. Gould, A. B. De Gonzalez, and D. L. Miglioretti, “Radiation dose associated with common computed tomography examinations and the associated lifetime attributable risk of cancer,” Archives of Internal Medicine 169(22), 2078–2086 (2009).

11. X. Pan, E. Y. Sidky, and M. Vannier, “Why do commercial CT scanners still employ traditional, filtered back-projection for image reconstruction?” Inverse Problems 25(12), 123009 (2009).

12. K. Kouris, H. Tuy, A. Lent, G. T. Herman, and R. M. Lewitt, “Reconstruction from sparsely sampled data by ART with interpolated rays,” IEEE Trans. Medical Imaging 1(3), 161–167 (1982).

13. K. C. Tam and V. Perez-Mendez, “Tomographical imaging with limited-angle input,” J. Opt. Soc. Am. 71(5), 582–592 (1981).

14. J. S. Jorgensen, E. Y. Sidky, and X. Pan, “Quantifying admissible undersampling for sparsity-exploiting iterative image reconstruction in X-ray CT,” IEEE Trans. Medical Imaging 32(2), 460–473 (2013).

15. K. Choi and D. J. Brady, “Coded aperture computed tomography,” Proc. SPIE 7468, 74680B (2009).

16. M. Hassan, J. A. Greenberg, and D. J. Brady, “Snapshot fan beam coded aperture coherent scatter tomography,” Opt. Express 24(16), 18277–18289 (2016).

17. J. Greenberg, K. Krishnamurthy, and D. Brady, “Compressive single-pixel snapshot x-ray diffraction imaging,” Opt. Lett. 39(1), 111–114 (2014).

18. A. Cuadros, C. Peitsch, H. Arguello, and G. R. Arce, “Coded aperture optimization for compressive X-ray tomosynthesis,” Opt. Express 23(25), 32788–32802 (2015).

19. Y. Kaganovsky, D. Li, A. Holmgren, H. Jeon, K. P. MacCabe, D. G. Politte, J. A. O’Sullivan, L. Carin, and D. J. Brady, “Compressed sampling strategies for tomography,” J. Opt. Soc. Am. A 31(7), 1369–1394 (2014).

20. A. Cuadros and G. R. Arce, “Coded aperture optimization in compressive X-ray tomography: a gradient descent approach,” Opt. Express 25(20), 23833–23849 (2017).

21. A. P. Cuadros, K. Wang, C. Peitsch, H. Arguello, and G. R. Arce, “Coded aperture design for compressive X-ray tomosynthesis,” in Imaging and Applied Optics 2015, OSA Technical Digest (Optical Society of America, 2015), paper CW2F.2.

22. A. Cuadros and G. R. Arce, “Coded aperture compressive X-ray spectral CT,” in Proceedings of IEEE Conference on Sampling Theory and Applications (IEEE, 2017), pp. 548–551.

23. G. R. Arce, D. J. Brady, L. Carin, H. Arguello, and D. S. Kittle, “Compressive coded aperture spectral imaging: an introduction,” IEEE Signal Process. Mag. 31(1), 105–115 (2014).

24. A. Wagadarikar, R. John, R. Willett, and D. Brady, “Single disperser design for coded aperture snapshot spectral imaging,” Appl. Opt. 47(10), B44–B51 (2008).

25. H. Rueda, C. Fu, D. L. Lau, and G. R. Arce, “Single aperture spectral + ToF compressive camera: toward hyperspectral + depth imagery,” IEEE Journal of Selected Topics in Signal Processing 11(7), 992–1003 (2017).

26. E. J. Candes and M. B. Wakin, “An introduction to compressive sampling,” IEEE Signal Process. Mag. 25(2), 21–30 (2008).

27. S. Foucart and H. Rauhut, A Mathematical Introduction to Compressive Sensing (Springer, 2013).

28. R. G. Baraniuk, “Compressive sensing [lecture notes],” IEEE Signal Process. Mag. 24(4), 118–121 (2007).

29. M. A. T. Figueiredo, R. D. Nowak, and S. J. Wright, “Gradient projection for sparse reconstruction: application to compressed sensing and other inverse problems,” IEEE Journal of Selected Topics in Signal Processing 1(4), 586–597 (2007).

30. J. Tan, Y. Ma, H. Rueda, D. Baron, and G. R. Arce, “Compressive hyperspectral imaging via approximate message passing,” IEEE Journal of Selected Topics in Signal Processing 10(2), 389–401 (2016).

31. F. Wang, Y. Yang, X. Lv, J. Xu, and L. Li, “Feature selection using feature ranking, correlation analysis and chaotic binary particle swarm optimization,” in Proceedings of IEEE Conference on Software Engineering and Service Science (IEEE, 2014), pp. 305–309.

32. I. Guyon and A. Elisseeff, “An introduction to variable and feature selection,” Journal of Machine Learning Research 3, 1157–1182 (2003).

33. J. D. Li, K. W. Cheng, S. H. Wang, F. Morstatter, R. P. Trevino, J. L. Tang, and H. A. Liu, “Feature selection: a data perspective,” ACM Computing Surveys 50(6), 94 (2017).

34. M. A. Hall, “Correlation-based Feature Selection for Machine Learning,” https://www.cs.waikato.ac.nz/~mhall/thesis.pdf.

35. T. L. Moore, “Using Euler’s formula to solve plane separation problems,” The College Mathematics Journal 22(2), 125–130 (1991).

36. F. T. Lin, Y. K. Cheng, and C. H. Ching, “Applying the genetic approach to simulated annealing in solving some NP-hard problems,” IEEE Trans. Systems, Man, and Cybernetics 23(6), 1752–1767 (1993).

37. K. Hamalainen, A. Harhanen, A. Kallonen, A. Kujanpaa, E. Niemi, and S. Siltanen, “Tomographic X-ray data of a walnut,” http://arxiv.org/abs/1502.04064.

38. W. V. Aarle, W. J. Palenstijn, J. Cant, E. Janssens, F. Bleichrodt, A. Dabravolski, J. D. Beenhouwer, K. J. Batenburg, and J. Sijbers, “Fast and flexible X-ray tomography using the ASTRA toolbox,” Opt. Express 24, 25129–25147 (2016).

39. H. Dai, G. Gu, W. He, F. Liao, J. Zhuang, X. Liu, and Q. Chen, “Adaptive compressed sampling based on extended wavelet trees,” Appl. Opt. 53(29), 6619–6628 (2014).

40. H. Dai, G. Gu, W. He, L. Ye, T. Mao, and Q. Chen, “Adaptive compressed photon counting 3D imaging based on wavelet trees and depth map sparse representation,” Opt. Express 24(23), 26080–26096 (2016).

41. Y. Adam, C. Thrampoulidis, and G. Wornell, “Analysis and Optimization of Aperture Design in Computational Imaging,” https://arxiv.org/pdf/1712.04541.pdf.

42. T. A. Bubba, A. Hauptmann, S. Huotari, J. Rimpelainen, and S. Siltanen, “Tomographic x-ray data of a lotus root filled with attenuating objects,” https://arxiv.org/abs/1609.07299.

43. A. G. Bluman, Elementary Statistics (McGraw Hill, 2013).
