Abstract

Compressive Spectral Imaging (CSI) is an emerging technology that aims at reconstructing a spectral image from a limited set of two-dimensional projections. To capture these projections, CSI architectures often combine light-dispersive elements with coded apertures or programmable spatial light modulators. This work introduces a novel and compact CSI architecture based on a deformable mirror and a colored-filter detector that produces compressive spatio-spectral projections without the need for a grating or prism. Alongside, we propose a tensor-based reconstruction algorithm to recover the spatial-spectral information from the compressed measurements. Experimental results on both simulated and real datasets demonstrate the efficacy of the proposed acquisition architecture and the specially crafted inversion algorithms.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Spectral imaging (SI) has become a powerful analytical tool in various fields, such as remote sensing, agriculture, environmental monitoring and medicine [1–3]. The datasets acquired by an SI system are three-dimensional images that comprise a large amount of spatial information across a multitude of wavelengths. Conventional imaging spectrometers scan adjacent zones of the underlying scene and combine the readings to construct spectral data cubes [4–6]. Specifically, these sampling techniques are based on pushbroom, whiskbroom or spotlight systems, which measure the signal at a certain constant sampling rate along each of the three dimensions [7]. As a result, these sensing techniques require scanning a number of slices of the datacube that grows linearly with the desired spatial-spectral resolution. Thus, these systems are impractical in dynamic or fast-moving scenarios.

To alleviate these difficulties, compressive spectral imaging (CSI) has emerged as an alternative acquisition framework, based on compressive sensing (CS) theory, that simultaneously captures and compresses the relevant information embedded in the spectral image of interest [8]. CSI aims at recovering a spatio-spectral data cube by acquiring two-dimensional projections of encoded and shifted versions of the spatio-spectral source, exploiting the use of coded apertures and optical dispersive elements [7, 9]. In particular, the original coded aperture snapshot spectral imager (CASSI) paradigm was introduced by the seminal dual disperser (DD-CASSI) [10] architecture, which employed two dispersive elements and one coded aperture, leading to measurements that only encode the spectral content at each spatial position. As an evolutionary step, the single disperser (SD-CASSI) [11] has become one of the most popular spectral imaging sampling schemes since it uses only one coded aperture located at the image plane, which is later dispersed by a single prism or grating before being projected onto the detector; this produces a particular spatio-spectral coded projection while effectively reducing the calibration complexity in contrast with the DD-CASSI. All CASSI schemes rely on CS sparse reconstruction algorithms to recover the data cube by exploiting spatio-spectral redundancies in a transform domain such as the Wavelet or DCT domains. Several variations of the SD-CASSI have been proposed since its inception, such as the approach in [12], where the prism and coded aperture are interchanged, generating a different class of spatio-spectral coding and using dictionary learning with overlapped patches to improve the reconstruction quality. Another modification to the CASSI systems that can be used to enhance the reconstruction quality is the use of dynamic optical elements such as digital micromirror devices (DMDs), which can act as programmable coded apertures, enabling the acquisition of different projections of the data cube. The schemes for the multishot DD-CASSI [10] and SD-CASSI [11] can be found in Figs. 1(a) and 1(b), respectively. Similarly, a spatial light modulator (SLM) based on LCOS technology enables the production of predetermined spectral filters, and together with the use of the DMD it has enabled the development of the colored coded aperture spectral imager (3D-CASSI) [13], providing a rich and flexible multishot spatio-spectral coding scheme, as seen in Fig. 1(c). It is worth noting that the reconstruction quality attained by the 3D-CASSI surpasses the results obtained by the DD-CASSI or SD-CASSI, at the expense of additional optical elements, more sensitivity to calibration errors, and a diminished light throughput.


Fig. 1 Diagram of four multishot compressive multispectral cameras: (a) DD-CASSI, (b) SD-CASSI, (c) 3D-CASSI and (d) proposed architecture.


In this regard, compressive spectral imaging (CSI) systems with more flexibility, versatility, better light throughput, and lower calibration complexity are highly desired [14]. Taking advantage of the simplicity of the RGB Bayer filter design, a static color-coded aperture placed in front of the detector array has been proposed as a robust solution for snapshot multispectral imaging [15], at the expense of the light throughput of the filters. However, the spatial resolution is severely affected as more spectral bands are demanded [11]; this can be mitigated by properly designing the filters [16], which can also be forced to have a defined throughput. Nonetheless, the usefulness of colored filter arrays can be improved by adding a fixed grating [17] or prism [18]. Similar to the case of the multishot CASSI systems, the performance can be greatly improved by adding variability, which can be achieved by moving the dispersive element [19]. To avoid depending on a dispersive element, in [20] a DMD is used for dynamically coding the spatial domain before the light reaches an array of Fabry-Perot filters placed before the detector, although these filters are narrow and may significantly reduce the light throughput. Moreover, the 3D-CASSI provides a mixed solution where both the spatial and spectral coding are dynamic, one performed by the DMD and the other provided by the LCOS SLM, although not as arbitrary nor as selective as the Fabry-Perot array. Nevertheless, it is well known that using a binary coded aperture scheme, such as that provided by a DMD, may jeopardize even further the diminished light throughput of a colored filter array. In this regard, deformable mirrors (DMs) have become an attractive solution as programmable phase coding devices that perform wavefront modulation at the pupil plane, used to compensate for the effects of atmospheric turbulence in astronomy images [21], correct the aberrations of the eye [22], encode the depth of field [23], and perform compressive imaging [24], among others, with a low loss of radiant energy flow. In particular, for compressive imaging [24], the DM provides dynamic phase modulation capabilities that also enable a multiframe coding approach.

In this work, we propose a novel multishot CSI system along with the mathematical framework required to efficiently reconstruct spectral images from the compressed measurements. The optical architecture relies on a DM for the spatial coding and an imaging sensor with a color-coded aperture for the spectral coding, as can be seen in Fig. 1(d). The main technical benefit of the deformable mirror (DM) is that it offers high-speed modulation with straightforward calibration and high light throughput, due to its controlled continuous phase modulation. Apart from allowing multiple shots, the proposed CSI system enables the design of spatial/spectral coding in arbitrary directions, in contrast with CASSI systems. We also propose a methodology that includes the spatial-spectral multiplexing sensing protocol and an alternating direction method of multipliers (ADMM)-based tensor reconstruction algorithm.

2. Optical sensing model of the proposed architecture

Let fo(x, y, λ) denote the 3D function that represents the spatial-spectral source density, with (x, y) being the spatial coordinates and λ the spectral dimension. Light is first transmitted through a beamsplitter to a deformable mirror (phase mask) located at the pupil plane, which allows adding controlled aberrations to fo(x, y, λ). This phase modulation can be expressed as

$$f_1(x,y,\lambda)=\iint \left|h(x-x',\,y-y',\,\lambda)\right|^{2} f_o(x',y',\lambda)\,dx'\,dy', \tag{1}$$
with
$$h(x,y,\lambda)=\frac{1}{\lambda z}\iint \mathcal{P}(x',y')\, e^{i2\pi \mathcal{W}(x',y')}\, e^{-i\frac{2\pi}{\lambda z}\left(x x'+y y'\right)}\,dx'\,dy', \tag{2}$$
where h(x, y, λ) represents the PSF, z ∈ ℝ the image distance, 𝒫(x, y) the pupil function, and 𝒲(x, y) the wavefront aberration function. Specifically, the wavefront can be expressed as $\mathcal{W}(x,y)=\sum_{i=1}^{\infty} a_i Z_i(x,y)$, where Zi(x, y) represents the i-th Zernike polynomial in Noll's notation and ai is its aberration coefficient [25]. Although the DM is achromatic in nature, the use of lenses with potential chromatic aberrations in the system introduces the need to model a spectrally dependent PSF. The coding applied by the color-coded aperture to the spatio-spectral density is f1(x, y, λ)t(x, y, λ), with
$$t(x,y,\lambda)=\sum_{i_1,i_2,i_3} T_{i_1,i_2,i_3}\,\operatorname{rect}\!\left(\frac{x}{\Delta_c}-i_1,\ \frac{y}{\Delta_c}-i_2,\ \frac{\lambda}{\Delta_d}-i_3\right), \tag{3}$$
where Ti1,i2,i3 ∈ [0, 1] is the coding performed on the (i1, i2, i3)-th spectral voxel, and Δc and Δd account for the pixel sizes of the color-coded aperture and the focal plane array (FPA) detector, respectively. Furthermore, there exists a one-to-one matching between the elements of the color filter array and those of the FPA; thus, the energy at one detector pixel is affected by just one optical filter element. The continuous image on the detector can then be expressed as
$$g(x,y)=\int_{\lambda_1}^{\lambda_2}\int_{-\frac{\Delta_d}{2}}^{\frac{\Delta_d}{2}}\int_{-\frac{\Delta_d}{2}}^{\frac{\Delta_d}{2}} t(x-x',\,y-y',\,\lambda)\, f_1(x-x',\,y-y',\,\lambda)\,dx'\,dy'\,d\lambda. \tag{4}$$
Equation (4) can be posed as a compressed sensing problem, which is traditionally modeled as the linear system g = Φf + ω. In this sense, g represents the discrete vectorized version of the measurements, f is the discrete vectorized version of the spectral image, ω accounts for the sensing noise, and Φ ∈ ℝ^{m×n} is a matrix including the sensing transfer function, with n = NML and m = N1N2K, where K is the number of snapshots. Here, for each snapshot, the membrane of the deformable mirror is deflected according to an arbitrary Zernike expansion $\mathcal{W}^{(k)}(x,y)=\sum_{i=1}^{\infty} a_i^{(k)} Z_i(x,y)$ with k ∈ {1, ..., K}, where 𝒲^{(k)}(x, y) represents the membrane deformation for the k-th snapshot. Such a linear system requires the product between a large sparse matrix Φ and a vector, demanding a mathematical operation of order 𝒪(mn). To alleviate this problem, we model the acquisition process in Eq. (4) in tensor form, following the procedure described in the Appendix, as
$$\underline{\mathbf{G}}=\left[\underline{\mathbf{T}}\circ\left(\prod_{i=1}^{3}\left[\underline{\mathbf{W}}\circ\left(\left[\prod_{j=1}^{3}\underline{\mathbf{F}}\times_{j}\mathbf{A}_{j}\right]\times_{4}\mathbf{1}\right)\right]\times_{i}\mathbf{A}_{i}^{T}\right)\right]\times_{3}\mathbf{1}^{T}, \tag{5}$$
where G̱ ∈ ℝ^{M×N×1×K} represents the measurements, Ṯ ∈ ℝ^{M×N×L×K} is the color-coded aperture, W̱ ∈ ℂ^{M×N×L×K} corresponds to the wavefront aberration tensor, ∘ denotes the element-wise product, A3 = I ∈ ℝ^{L×L} is an identity matrix, and the matrices A1 ∈ ℝ^{M×M}, A2 ∈ ℝ^{N×N} are the direct discrete Fourier transform bases. Note here that the tensorial representation in Eq. (5) preserves the original structure of the data, which is critical for the analysis of its intrinsic properties. The conventional problem of obtaining an estimate of F̱ from the set of compressed measurements G̱, considering Eq. (5), is given by
$$\underset{\underline{\mathbf{F}}}{\arg\min}\;\left\|\underline{\mathbf{G}}-\left[\underline{\mathbf{T}}\circ\left(\prod_{i=1}^{3}\left[\underline{\mathbf{W}}\circ\left(\left[\prod_{j=1}^{3}\underline{\mathbf{F}}\times_{j}\mathbf{A}_{j}\right]\times_{4}\mathbf{1}\right)\right]\times_{i}\mathbf{A}_{i}^{T}\right)\right]\times_{3}\mathbf{1}^{T}\right\|_{F}^{2}, \tag{6}$$
where ‖·‖F represents the Frobenius norm. To promote correct solutions of F̱ from its compressive measurements G̱, CSI theory demands the use of sparsity-promoting priors. Sparsity indicates that F̱ must be S-sparse in an arbitrary set of orthogonal bases Ψj ∈ ℝ^{Nj×Nj} with j ∈ {1, 2, 3} and {N1, N2, N3} = {M, N, L}, such that $\underline{\mathbf{F}}=\prod_{j=1}^{3}\underline{\boldsymbol{\Theta}}\times_{j}\boldsymbol{\Psi}_{j}$, where Θ̱ ∈ ℝ^{M×N×L} is the sparse tensor that contains S ≪ MNL non-zero coefficients. Therefore, the reconstruction problem described in Eq. (6) can be rewritten as
$$\underset{\underline{\boldsymbol{\Theta}}}{\arg\min}\;\left\|\underline{\mathbf{G}}-\left[\underline{\mathbf{T}}\circ\left(\prod_{i=1}^{3}\left[\underline{\mathbf{W}}\circ\left(\left[\prod_{j=1}^{3}\underline{\boldsymbol{\Theta}}\times_{j}\mathbf{A}_{j}\boldsymbol{\Psi}_{j}\right]\times_{4}\mathbf{1}\right)\right]\times_{i}\mathbf{A}_{i}^{T}\right)\right]\times_{3}\mathbf{1}^{T}\right\|_{F}^{2}+\lambda\left\|\underline{\boldsymbol{\Theta}}\right\|_{1}, \tag{7}$$
where ‖·‖1 represents the ℓ1-norm.
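To make the forward model of Eqs. (1)–(4) concrete, the following minimal Python/NumPy sketch simulates one compressive snapshot. It assumes a square image, a circular pupil, a single defocus-like Zernike term with the wavefront expressed in waves (so the chromatic dependence of the PSF in Eq. (2) is neglected), and circular convolution; the function names (`defocus_psf`, `forward_snapshot`) and all numerical values are illustrative, not the implementation used in this work.

```python
import numpy as np

def defocus_psf(n_pix, a4, pupil_radius=0.45):
    """Incoherent PSF |h|^2 of a circular pupil with a defocus-like Zernike term.
    Sketch of Eqs. (1)-(2); W is given in waves and chromatic effects are ignored."""
    u = np.linspace(-0.5, 0.5, n_pix)
    X, Y = np.meshgrid(u, u)
    rho2 = (X**2 + Y**2) / pupil_radius**2
    pupil = (rho2 <= 1.0).astype(float)                # P(x', y')
    W = a4 * np.sqrt(3.0) * (2.0 * rho2 - 1.0)          # a4 * Z4 (defocus)
    aperture = pupil * np.exp(1j * 2.0 * np.pi * W)
    h = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(aperture)))
    psf = np.abs(h)**2
    return psf / psf.sum()                              # unit-energy PSF

def forward_snapshot(cube, coded_aperture, a4):
    """One compressive snapshot (Eq. 4): blur every band with the DM-induced PSF,
    apply the colored coded aperture T, and integrate over the spectral axis."""
    M, N, L = cube.shape                                # assumes M == N (square)
    otf = np.fft.fft2(np.fft.ifftshift(defocus_psf(M, a4)))
    g = np.zeros((M, N))
    for l in range(L):
        blurred = np.real(np.fft.ifft2(np.fft.fft2(cube[:, :, l]) * otf))
        g += coded_aperture[:, :, l] * blurred
    return g

# Example: random 64x64x8 datacube and a Bernoulli(0.5) colored coded aperture
rng = np.random.default_rng(0)
cube = rng.random((64, 64, 8))
T = rng.integers(0, 2, size=(64, 64, 8)).astype(float)
g = forward_snapshot(cube, T, a4=0.4)
```

Repeating `forward_snapshot` with a different Zernike coefficient per shot yields the K measurements that are stacked into the tensor G̱ of Eq. (5).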

3. Proposed reconstruction algorithm

The goal of the proposed algorithm, summarized in Algorithm 1, is to reconstruct Θ̱ from the compressive measurements G̱ by solving Eq. (7). Algorithm 1 solves Eq. (7) via a proximal alternating optimization methodology, which finds Θ̱ through

$$\underline{\boldsymbol{\Theta}}^{\iota+1}\in\underset{\underline{\boldsymbol{\Theta}}}{\arg\min}\;\left\|\underline{\mathbf{G}}-\left[\underline{\mathbf{T}}\circ\left(\prod_{i=1}^{3}\left[\underline{\mathbf{W}}\circ\left(\left[\prod_{j=1}^{3}\underline{\boldsymbol{\Theta}}\times_{j}\mathbf{A}_{j}\boldsymbol{\Psi}_{j}\right]\times_{4}\mathbf{1}\right)\right]\times_{i}\mathbf{A}_{i}^{T}\right)\right]\times_{3}\mathbf{1}^{T}\right\|_{F}^{2}+\beta\left\|\underline{\boldsymbol{\Theta}}^{\iota}-\underline{\boldsymbol{\Theta}}\right\|_{F}+\lambda_{1}\left\|\underline{\boldsymbol{\Theta}}\right\|_{1}+\lambda_{2}\left\|\prod_{j=1}^{3}\underline{\boldsymbol{\Theta}}\times_{j}\boldsymbol{\Psi}_{j}\right\|_{TV}, \tag{8}$$
where ‖ · ‖TV represents the total variation regularization, λ1 > 0, λ2 > 0 and β > 0 are regularization parameters, and Eq. (8) is solved using an ADMM approach [26]. For this purpose, we introduce the functions $\Gamma_{1}(\underline{\boldsymbol{\Theta}})=\left\|\underline{\mathbf{G}}-\left[\underline{\mathbf{T}}\circ\left(\prod_{i=1}^{3}\left[\underline{\mathbf{W}}\circ\Gamma_{2}(\underline{\boldsymbol{\Theta}})\right]\times_{i}\mathbf{A}_{i}^{T}\right)\right]\times_{3}\mathbf{1}^{T}\right\|_{F}^{2}$, $\Gamma_{2}(\underline{\boldsymbol{\Theta}})=\left[\prod_{j=1}^{3}\underline{\boldsymbol{\Theta}}\times_{j}\mathbf{A}_{j}\boldsymbol{\Psi}_{j}\right]\times_{4}\mathbf{1}$, $\Gamma_{3}(\underline{\boldsymbol{\Omega}})=\beta\|\underline{\boldsymbol{\Omega}}^{\iota}-\underline{\boldsymbol{\Omega}}\|_{F}+\lambda_{1}\|\underline{\boldsymbol{\Omega}}\|_{1}$ and $\Gamma_{4}(\underline{\boldsymbol{\Xi}})=\|\underline{\boldsymbol{\Xi}}\|_{TV}$, with the splitting variables Ω̱ = Θ̱ and $\underline{\boldsymbol{\Xi}}=\prod_{j=1}^{3}\underline{\boldsymbol{\Theta}}\times_{j}\boldsymbol{\Psi}_{j}$, such that Eq. (8) can be rewritten as
$$\underset{\underline{\boldsymbol{\Theta}},\,\underline{\boldsymbol{\Omega}},\,\underline{\boldsymbol{\Xi}}}{\arg\min}\;\Gamma_{1}(\underline{\boldsymbol{\Theta}})+\Gamma_{3}(\underline{\boldsymbol{\Omega}})+\Gamma_{4}(\underline{\boldsymbol{\Xi}})\quad\text{s.t.}\quad \underline{\boldsymbol{\Theta}}=\underline{\boldsymbol{\Omega}},\;\;\underline{\boldsymbol{\Xi}}=\prod_{j=1}^{3}\underline{\boldsymbol{\Theta}}\times_{j}\boldsymbol{\Psi}_{j}, \tag{9}$$
where its augmented Lagrangian is given by $\mathcal{L}(\underline{\boldsymbol{\Theta}},\underline{\boldsymbol{\Omega}},\underline{\boldsymbol{\Xi}},\underline{\boldsymbol{\nu}}_{1},\underline{\boldsymbol{\nu}}_{2})=\Gamma_{1}(\underline{\boldsymbol{\Theta}})+\Gamma_{3}(\underline{\boldsymbol{\Omega}})+\Gamma_{4}(\underline{\boldsymbol{\Xi}})+\mu_{1}\left\|\underline{\boldsymbol{\Theta}}-\underline{\boldsymbol{\Omega}}-\underline{\boldsymbol{\nu}}_{1}\right\|_{F}^{2}+\mu_{2}\left\|\underline{\boldsymbol{\Xi}}-\prod_{j=1}^{3}\underline{\boldsymbol{\Theta}}\times_{j}\boldsymbol{\Psi}_{j}-\underline{\boldsymbol{\nu}}_{2}\right\|_{F}^{2}$, where ν̱1 and ν̱2 are the Lagrange multipliers and {μ1, μ2} ≥ 0 are the AL penalty parameters [26]. Based on $\mathcal{L}(\underline{\boldsymbol{\Theta}},\underline{\boldsymbol{\Omega}},\underline{\boldsymbol{\Xi}},\underline{\boldsymbol{\nu}}_{1},\underline{\boldsymbol{\nu}}_{2})$, the alternating form of Eq. (9) can be decoupled into three independent optimization problems
$$\underline{\tilde{\boldsymbol{\Theta}}}^{\iota+1}=\underset{\underline{\boldsymbol{\Theta}}}{\arg\min}\;\left\|\underline{\mathbf{G}}-\left[\underline{\mathbf{T}}\circ\left(\prod_{i=1}^{3}\left[\underline{\mathbf{W}}\circ\Gamma_{2}(\underline{\boldsymbol{\Theta}})\right]\times_{i}\mathbf{A}_{i}^{T}\right)\right]\times_{3}\mathbf{1}^{T}\right\|_{F}^{2}+\mu_{1}\left\|\underline{\boldsymbol{\Theta}}-\underline{\boldsymbol{\Omega}}^{\iota}-\underline{\boldsymbol{\nu}}_{1}^{\iota}\right\|_{F}^{2}+\mu_{2}\left\|\underline{\boldsymbol{\Xi}}^{\iota}-\prod_{j=1}^{3}\underline{\boldsymbol{\Theta}}\times_{j}\boldsymbol{\Psi}_{j}-\underline{\boldsymbol{\nu}}_{2}^{\iota}\right\|_{F}^{2}, \tag{10}$$
$$\underline{\boldsymbol{\Omega}}^{\iota+1}=\underset{\underline{\boldsymbol{\Omega}}}{\arg\min}\;\beta\left\|\underline{\boldsymbol{\Omega}}^{\iota}-\underline{\boldsymbol{\Omega}}\right\|_{F}^{2}+\mu_{1}\left\|\underline{\boldsymbol{\Theta}}^{\iota}-\underline{\boldsymbol{\Omega}}-\underline{\boldsymbol{\nu}}_{1}^{\iota}\right\|_{F}^{2}+\lambda_{1}\left\|\underline{\boldsymbol{\Omega}}\right\|_{1}, \tag{11}$$
$$\underline{\boldsymbol{\Xi}}^{\iota+1}=\underset{\underline{\boldsymbol{\Xi}}}{\arg\min}\;\left\|\underline{\boldsymbol{\Xi}}\right\|_{TV}+\mu_{2}\left\|\underline{\boldsymbol{\Xi}}-\prod_{j=1}^{3}\underline{\boldsymbol{\Theta}}^{\iota}\times_{j}\boldsymbol{\Psi}_{j}-\underline{\boldsymbol{\nu}}_{2}^{\iota}\right\|_{F}^{2}. \tag{12}$$
To solve the optimization problems in Eqs. (10)–(12), we use the Sherman-Morrison-Woodbury matrix inversion formula [27], soft-thresholding, and a fast TV minimization approach, respectively. Algorithm 1 summarizes the resulting procedure for reconstructing the datacube F̱ from its compressive measurements G̱.
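As a hedged illustration of the soft-thresholding step, the Ω̱ subproblem in Eq. (11) admits a closed form: the two quadratic terms are combined into a weighted average, and the ℓ1 term is handled by element-wise shrinkage. The sketch below assumes the quadratic proximal term of Eq. (11) and one common scaled-multiplier convention; the function names and parameter values are illustrative, not the paper's implementation.

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of tau * ||.||_1 (element-wise shrinkage)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def omega_update(theta, omega_prev, nu1, beta, lam1, mu1):
    """Closed-form sketch of Eq. (11): average the previous iterate and
    (Theta - nu1) according to beta and mu1, then soft-threshold the l1 term."""
    v = (beta * omega_prev + mu1 * (theta - nu1)) / (beta + mu1)
    return soft_threshold(v, lam1 / (2.0 * (beta + mu1)))

def multiplier_update(nu1, theta, omega):
    """Dual ascent step for the constraint Theta = Omega (scaled-multiplier form)."""
    return nu1 - (theta - omega)

# Toy usage on a random tensor
rng = np.random.default_rng(0)
theta = rng.standard_normal((8, 8, 4))
omega = np.zeros_like(theta)
nu1 = np.zeros_like(theta)
omega = omega_update(theta, omega, nu1, beta=0.1, lam1=0.05, mu1=1.0)
nu1 = multiplier_update(nu1, theta, omega)
```

The Θ̱ update of Eq. (10) and the TV update of Eq. (12) are handled by the matrix-inversion and fast-TV routines mentioned above and are not sketched here.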


Algorithm 1: Spectral image reconstruction

4. Simulations and experimental results

In this section, the performance of the proposed CSI architecture is evaluated using synthetic and experimental compressive measurements. First, the results obtained with the compressive sensing (CS) reconstruction algorithm are presented using the simulated acquisition process of the CSI architecture. Then, a testbed implementation of the proposed system is used for capturing experimental measurements, as can be seen in Fig. 2.


Fig. 2 Experimental setup implemented in the laboratory.


4.1. Quality metrics

To compare the quality of the spectral images rendered by the proposed architecture against those obtained with the SD-CASSI, DD-CASSI, and 3D-CASSI architectures, two metrics were selected: the peak signal-to-noise ratio (PSNR), defined as

$$\mathrm{PSNR}=\frac{1}{L}\sum_{i_3=1}^{L}\left[10\log_{10}\!\left(\frac{\max\!\left(\tilde{\mathbf{F}}_{:,:,i_3}\right)^{2}}{\mathrm{MSE}\!\left(\mathbf{F}_{:,:,i_3},\tilde{\mathbf{F}}_{:,:,i_3}\right)}\right)\right], \tag{13}$$
and the spectral angle mapper (SAM), defined as
$$\mathrm{SAM}=\frac{1}{MN}\sum_{i_1=1}^{M}\sum_{i_2=1}^{N}\cos^{-1}\!\left(\frac{\sum_{i_3=1}^{L}\mathbf{F}_{i_1,i_2,i_3}\,\tilde{\mathbf{F}}_{i_1,i_2,i_3}}{\left\|\mathbf{F}_{i_1,i_2,:}\right\|_{2}\left\|\tilde{\mathbf{F}}_{i_1,i_2,:}\right\|_{2}}\right), \tag{14}$$
where max(·) : ℝ^{M×N} → ℝ is a function that returns the maximum possible value of F̃:,:,i3, MSE is the mean squared error between F:,:,i3 and its approximation F̃:,:,i3, ‖F_{i1,i2,:}‖2 denotes the Euclidean norm of the spectral signature at the spatial position (i1, i2), and $\sum_{i_3=1}^{L}\mathbf{F}_{i_1,i_2,i_3}\tilde{\mathbf{F}}_{i_1,i_2,i_3}=\langle \mathbf{F}_{i_1,i_2,:},\,\tilde{\mathbf{F}}_{i_1,i_2,:}\rangle$ with 〈·, ·〉 representing the dot product between two vectors. Thus, the PSNR metric is the ratio between the maximum possible power of a signal and the power of the corrupting noise, related to the fidelity of the reconstruction, while the SAM metric provides the spectral angle between the spectral signature of the reference image and that of its approximation.
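For reference, a direct NumPy transcription of the two metrics in Eqs. (13) and (14) could read as follows; this is a sketch, not the evaluation code used for the paper, and the conversion of SAM to degrees simply matches how the results are reported.

```python
import numpy as np

def psnr(F, F_hat):
    """Band-averaged PSNR of Eq. (13); F and F_hat are M x N x L datacubes."""
    vals = []
    for i3 in range(F.shape[2]):
        mse = np.mean((F[:, :, i3] - F_hat[:, :, i3]) ** 2)
        vals.append(10.0 * np.log10(np.max(F_hat[:, :, i3]) ** 2 / mse))
    return np.mean(vals)

def sam(F, F_hat, eps=1e-12):
    """Mean spectral angle (degrees) between per-pixel signatures, Eq. (14)."""
    dot = np.sum(F * F_hat, axis=2)
    norms = np.linalg.norm(F, axis=2) * np.linalg.norm(F_hat, axis=2) + eps
    angles = np.arccos(np.clip(dot / norms, -1.0, 1.0))
    return np.degrees(angles).mean()
```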

4.2. Simulated acquisition process

In order to verify the proposed CSI architecture, the compressive measurements are simulated using the forward model in Eq. (5), adding Gaussian noise with zero mean and variance given by a specific signal-to-noise ratio (SNR). The spectral images used to test the performance of the proposed architecture are Beads [28], Feathers [28], Ribeira [29], and Flowers [29]. The spectral images have a spatial resolution of M × N = 512 × 512 and L = 26 spectral bands. The transmission function of the fixed color-coded aperture is generated using a Bernoulli distribution with p = 0.5. For each data set, we test three noise scenarios with signal-to-noise ratios (SNR) of {20, 30, 50} [dB], and compression levels of c = 100 × (1 − K/L) ∼ {93%, 90%, 85%}, obtained using K = {2, 3, 4} snapshots, respectively. The latter means that the spectral estimation is attained from only ∼ {7%, 10%, 15%} of the original data. To select the set of Zernike polynomials that exhibits the best performance in the proposed architecture, a cross-validation analysis was carried out by varying the two Zernike polynomials used for the reconstruction of the spectral image, as can be seen in Fig. 3. Specifically, this analysis was developed in two scenarios. First, one of the measurements is simulated by deforming the surface of the DM using a pure Zernike polynomial W(x, y) = aiZi, and the second measurement is simulated by setting the DM without aberration on its surface, W(x, y) = 0, as can be seen in Fig. 3(a). Second, one of the measurements is simulated by deforming the surface of the DM using random Zernike coefficients $W(x,y)=\sum_{i=4}^{15} a_i Z_i$, and the second measurement is simulated by deforming the surface of the DM using a pure Zernike polynomial W(x, y) = aiZi, as can be seen in Fig. 3(b). For the second case, one hundred runs were performed and the mean values were calculated. Based on the simulation results, illustrated in Fig. 3, the proposed architecture exhibits better performance when the deformation in the first snapshot is ai = 0 and the following snapshots are acquired with i = {12, 13, 5, 14, 15} and ai = 0.4, respectively. This sequence of Zernike polynomials is selected taking into account the trade-off between PSNR and SAM performance.
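As a hedged illustration of how the synthetic measurements are generated and degraded, the following sketch draws a Bernoulli(0.5) colored coded aperture and adds zero-mean Gaussian noise at a prescribed SNR; the exact noise convention used in the simulations (e.g., per-snapshot versus global signal power) may differ, and the names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def bernoulli_coded_aperture(M, N, L, p=0.5):
    """Fixed colored coded aperture T with i.i.d. Bernoulli(p) entries."""
    return (rng.random((M, N, L)) < p).astype(float)

def add_noise(g, snr_db):
    """Add zero-mean Gaussian noise whose variance is set by the target SNR in dB."""
    signal_power = np.mean(g ** 2)
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    return g + rng.normal(0.0, np.sqrt(noise_power), size=g.shape)

# Example: corrupt a simulated snapshot at SNR = 30 dB
T = bernoulli_coded_aperture(512, 512, 26)
g_clean = rng.random((512, 512))          # placeholder for a simulated snapshot
g_noisy = add_noise(g_clean, snr_db=30.0)
```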


Fig. 3 Spectral image reconstruction with K = 2 and SNR = 50 [dB]. Here, for the first snapshot, in the two cases (a) and (b), the measurement is generated by using W(x, y) = aiZi, where i ∈ {4, ..., 15} and ai ∈ {0.15, 0.2, 0.25, 0.3, 0.35, 0.4, 0.45, 0.5}. For the second snapshot, the measurement is generated by using (a) W(x, y) = 0 and (b) $W(x,y)=\sum_{i=4}^{15} a_i Z_i$.


To evaluate the impact of the total number of acquisitions and the noise level on the quality of the reconstruction, we analyze the performance of the SD-CASSI, DD-CASSI, 3D-CASSI and the proposed architecture in terms of the PSNR metric. In Fig. 4, it can be seen that the performance of the proposed approach is the best in all cases, and that its improvement over the traditional architectures diminishes as the number of snapshots grows. Specifically, the Zernike polynomials used in Fig. 4 were {Z0, Z12}, {Z0, Z12, Z13} and {Z0, Z12, Z13, Z5}, for K = {2, 3, 4}, respectively. Moreover, the compressive measurements generated with the aforementioned Zernike polynomials are illustrated for Z0 (Fig. 4(b)), Z5 (Fig. 4(c)), Z12 (Fig. 4(d)) and Z13 (Fig. 4(e)). Here, it is important to note that, even though the proposed architecture improves its performance as more snapshots are taken, an optimum trade-off between the compression ratio and the reconstruction quality is found at K = 3. As such, the following experiments are developed for K = 3, where the compressive measurements are generated from the Zernike polynomials Z0, Z12 and Z13 with ai = 0.4.


Fig. 4 Reconstruction quality analysis for K = {2, 3, 4} and SNR = {20, 30, 50} [dB]. Furthermore, measurement examples generated from the Zernike polynomials (b) W = 0, (c) W = Z5, (d) W = Z12 and (e) W = Z13 are illustrated.


Table 1 reports the reconstruction quality averaged over the datasets, measured in terms of both metrics. For clarity, the best results are bold-faced and the second best are underlined. There, it can be seen that the proposed architecture outperforms the 3D-CASSI, DD-CASSI and SD-CASSI architectures at all noise levels. Specifically, the proposed architecture gains up to 0.91, 1.03 and 5.4 dB in averaged PSNR against the 3D-CASSI, DD-CASSI and SD-CASSI architectures, and attains 1.6°, 2.7° and 2.8° lower averaged SAM against the 3D-CASSI, DD-CASSI and SD-CASSI architectures, respectively. Figure 5(a) illustrates an RGB composite of the attained reconstructions for the SD-CASSI, DD-CASSI, 3D-CASSI and proposed approaches with a compression ratio of c ∼ 90% (K = 3) and a noise level of SNR = 20 [dB]. Further results with all the datasets can be appreciated online in the supplemental Visualization 1. To verify the accuracy of the results, Fig. 5(b) shows the cumulative absolute error between the ground-truth Beads image and each reconstruction. It can be observed that the error obtained by the proposed approach is lower than that of the 3D-CASSI, DD-CASSI, and SD-CASSI approaches. Finally, Fig. 5(c) shows a visual comparison of 8 out of the 26 spectral bands. To evaluate the spectral accuracy of the reconstructions, three different points of each scene are illustrated in Fig. 6. This comparison shows that, despite the good approximation of the spectra obtained with the SD-CASSI, DD-CASSI, and 3D-CASSI approaches, the proposed architecture provides better approximations of the spectral information. Overall, in Figs. 5 and 6 it can be observed that the proposed architecture surpasses the reconstructions obtained from the rest of the evaluated architectures.


Table 1. Reconstruction Results Comparison for Simulations using K = 3.


Fig. 5 ( Visualization 1) (a) Visual comparison of the spectral reconstructions, mapped to an RGB profile, (b) absolute error of the reconstructions, and (c) comparison of 8 out of the 26 spectral bands. These simulations were developed with the parameters SNR = 20 [dB] and K = 3.



Fig. 6 Visual comparison of spectral signatures for the reconstructions from the SD-CASSI, DD-CASSI, 3D-CASSI and the proposed architecture.


4.3. Experimental results

To experimentally evaluate the effectiveness of the proposed architecture, we built a proof-of-concept CSI system using a monochrome detector, as seen in Fig. 2. The laboratory prototype is composed of a main objective lens coupled with a 4f system that has a phase modulating element at 2f. The detector array (Point Grey Grasshopper GS3-U3-23S6M-C) is placed at the image plane. In detail, the intermediate image plane is formed by a 16 mm objective lens (Computar M1614-MP2), which is relayed by a pair of 75 mm Fourier transforming lenses (Thorlabs AC254-075-A-ML). Using a beamsplitter (BS, Thorlabs CCM1-BS013), we placed a deformable mirror (DM, Thorlabs DMP40-P01) at the pupil plane at a distance of 2f = 150 mm from the intermediate image plane. Finally, the detector array is placed at a distance of 2f = 150 mm from the deformable mirror. Also, spectral filters mounted on a filter wheel are used before the camera. Specifically, a set of twelve color filters of Δλ = 10 nm bandwidth, with steps of ∼ 20 nm within the 460–680 nm spectral range, was used. In this way, we can emulate any potential colored-filter array detector. Similarly to the simulations, the transmission function of the emulated color-coded aperture is generated using a Bernoulli distribution with p = 0.5. For the experiments, we used two kinds of targets to acquire the experimental spectral datasets with the implemented testbed: an OLED screen (OLED-128-G2, 4D Systems) with a spatial resolution of 128 × 128 displaying an RGB image, and a constructed scene using a collection of different colored toys. Using the OLED screen, an image composed of several color beads is loaded onto the screen, and the resulting “Digital beads” dataset exhibits a spatial resolution of M × N = 128 × 128 pixels and L = 12 spectral bands. On the other hand, the constructed scene was illuminated by a 125 [W] mercury lamp and a calibrated quartz tungsten-halogen (QTH) lamp (Newport QTH 63355), recording the “Toys” dataset with a spatial resolution of M × N = 512 × 512 pixels and L = 12 spectral bands. Moreover, to validate the spectral reconstruction fidelity, several spatial points of the real scenes were also measured with a spectrometer (SR-2500, Spectral Evolution) over the same range as the color filters. In both experiments, only K = 3 snapshots were captured, which corresponds to sampling only ∼ 25% of the full datacube. Figures 7(a) and 8(a) show an RGB spectral-mapped version of the “Digital beads” and “Toys” scenes reconstructed from the compressive measurements acquired with the proposed architecture. The reconstructed spectral bands are shown in Figs. 7(b) and 8(b), while the specific spectra shown in Figs. 7(c) and 8(c) corroborate the accuracy and spectral resolution achieved by the experimental reconstructions when using the proposed architecture with our reconstruction algorithm.
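Since the colored-filter detector is emulated in software from the filter-wheel acquisitions, each detector pixel only contributes the subset of bands assigned to it by the Bernoulli transmission pattern. A minimal sketch of this emulation step is shown below; the array names and sizes are illustrative and this is not the acquisition code used in the laboratory.

```python
import numpy as np

def emulate_colored_detector(filter_frames, T):
    """Combine the L frames captured through the filter wheel (M x N x L) with
    the Bernoulli colored-coded-aperture pattern T (M x N x L) to emulate a
    single colored-filter-array snapshot on the monochrome detector."""
    return np.sum(T * filter_frames, axis=2)

# Example: 12 per-filter frames and a Bernoulli(0.5) pattern, as in the experiments
rng = np.random.default_rng(2)
frames = rng.random((128, 128, 12))                   # per-filter acquisitions
T = (rng.random((128, 128, 12)) < 0.5).astype(float)
g_snapshot = emulate_colored_detector(frames, T)
```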


Fig. 7 Experimental results using the “Digital beads” scene, where (a) is an RGB version of the reconstruction, (b) is a visualization of the 12 reconstructed spectral bands, and (c) shows spectral signature reconstructions of four points of the scene, using K = 3 snapshot measurements.



Fig. 8 Experimental results using the “Toys” scene, where (a) is an RGB version of the reconstruction, (b) is a visualization of the 12 reconstructed spectral bands, and (c) shows spectral signature reconstructions of four points of the scene, using K = 3 snapshot measurements.


5. Conclusion

In this article, we presented a novel and compact multishot CSI architecture based on a deformable mirror and a colored-filter detector to produce arbitrary spatio-spectral projections. The image reconstruction problem was formulated as an inverse problem with two regularization functions, ensuring a smooth reconstructed image and a sparse decomposition of the image in an appropriate representation basis, respectively. We developed an ADMM algorithm in tensor representation to efficiently solve the high-dimensional reconstruction problem. The proposed architecture was compared with three state-of-the-art CSI architectures on four spectral images, where the simulation results showed an improvement of up to 5.4 dB in PSNR and up to 2.8° lower averaged SAM when the proposed CSI scheme used as little as 10% of the data. In contrast with traditional CSI architectures, the proposed spectral imaging system establishes a better trade-off between reconstruction quality and light throughput. In particular, the obtained simulated and experimental reconstruction results validate the idea that moderate phase coding coupled with a colored-filter detector is a useful and practical tool to dynamically modulate information in the spatial and spectral domains, allowing a more compact CSI system.

Appendix: Preliminaries on tensors

In this section, we first summarize the main notation and then introduce some useful product, decomposition, and tensorial representation properties [30, 31]. Here, scalars are denoted as lowercase letters (a, b, . . .), vectors as bold-faced lowercase letters (a, b, . . .), matrices as bold-faced capital letters (A, B, . . .), and tensors as underlined bold-faced capital letters (X̱, Y̱, . . .).

  • Tensor notation: Let X̱ ∈ ℂ^{N1×N2×...×NT} be a T-dimensional tensor with a vector representation x = vec(X̱), where $\mathbf{x}\in\mathbb{C}^{\prod_{n=1}^{T}N_{n}}$ and $\mathrm{vec}(\cdot):\mathbb{C}^{N_{1}\times N_{2}\times\cdots\times N_{T}}\rightarrow\mathbb{C}^{\prod_{n=1}^{T}N_{n}}$.
  • n-Mode product: Let X̱ ∈ ℂ^{N1×...×NT} and A ∈ ℂ^{M×Nn} be a T-dimensional tensor and a matrix, respectively. Then, the n-mode product of the tensor by the matrix, (Y̱ = X̱ ×n A) ∈ ℂ^{N1×...×N(n−1)×M×N(n+1)×...×NT}, is defined by
    $$y_{i_1,\ldots,i_{n-1},j,i_{n+1},\ldots,i_T}=\sum_{i_n=1}^{N_n} x_{i_1,\ldots,i_n,\ldots,i_T}\, a_{j,i_n},$$
    where ik ∈ {1, 2, ..., Nk} with k ≠ n and j ∈ {1, 2, ..., M} with {Nk, M} ∈ ℕ.
  • Tensor decomposition: Let X̱ ∈ ℂ^{N1×...×NT} and An ∈ ℂ^{Nn×Nn} be a T-dimensional tensor and orthogonal matrices, respectively. Then, a representation of X̱ in the orthogonal bases An can be expressed as
    $$\underline{\boldsymbol{\Theta}}=\prod_{n=1}^{T}\underline{\mathbf{X}}\times_{n}\mathbf{A}_{n},$$
    where Θ̱ ∈ ℂ^{N1×...×NT} represents the core tensor, which reveals the underlying interactions between the orthogonal matrices Ai, with $\underline{\mathbf{X}}=\prod_{n=1}^{T}\underline{\boldsymbol{\Theta}}\times_{n}\mathbf{A}_{n}^{T}$, and $\underline{\mathbf{X}}=\prod_{i=1}^{T}\left(\prod_{j=1}^{T}\underline{\mathbf{X}}\times_{j}\mathbf{A}_{j}\right)\times_{i}\mathbf{A}_{i}^{T}=\prod_{i=1}^{T}\underline{\mathbf{X}}\times_{i}\mathbf{A}_{i}^{T}\mathbf{A}_{i}$ with $\mathbf{A}_{i}^{T}\mathbf{A}_{i}=\mathbf{I}_{i}$.
  • Tensor notation to linear-matrix model: Let Ai ∈ ℂ^{Mi×Ni} and X̱ ∈ ℂ^{N1×...×NT} be a collection of matrices and a T-dimensional tensor, respectively. Then, the n-mode product between X̱ and the set of matrices Ai can be expressed in matrix notation as
    $$\mathbf{y}=\mathrm{vec}(\underline{\mathbf{Y}})=\mathrm{vec}\!\left[\prod_{n=1}^{T}\underline{\mathbf{X}}\times_{n}\mathbf{A}_{n}\right]=\mathbf{A}\,\mathrm{vec}(\underline{\mathbf{X}}),$$
    where $\mathbf{A}\in\mathbb{C}^{\left(\prod_{i=1}^{T}M_{i}\right)\times\left(\prod_{i=1}^{T}N_{i}\right)}$ with A = (AT ⊗ AT−1 ⊗ ... ⊗ A1), and $\mathbf{y}\in\mathbb{C}^{\prod_{i=1}^{T}M_{i}}$ and $\mathbf{x}\in\mathbb{C}^{\prod_{i=1}^{T}N_{i}}$ are the vector versions of Y̱ and X̱, respectively (a numerical sketch of these operations follows this list).

Funding

Fondo Nacional de Ciencia y Tecnologia (FONDECYT) (1181943); Redes Etapa Inicial (REDI) (REDI170539); Departamento Administrativo de Ciencia, Tecnología e Innovación (COLCIENCIAS) (FP44842-309-2018); Programa regional MATH-AmSud (MATHAMSUD) (10-2017)

References

1. M. Kim, Y. Chen, and P. Mehl, “Hyperspectral reflectance and fluorescence imaging system for food quality and safety,” Transactions of the ASAE 44(3), 721–729 (2001).

2. C. Nansen, K. Singh, A. Mian, B. J. Allison, and C. Simmons, “Using hyperspectral imaging to characterize consistency of coffee brands and their respective roasting classes,” Journal of Food Engineering 190, 34–39 (2016). [CrossRef]  

3. G. Lu and B. Fei, “Medical hyperspectral imaging: a review,” Journal of Biomedical Optics 19(1), 010901 (2014). [CrossRef]  

4. Y. Schechner and S. Nayar, “Generalized mosaicing,” in IEEE International Conference on Computer Vision, (IEEE, 2001), pp. 17–24.

5. N. Gat, “Imaging spectroscopy using tunable filters: a review,” Proc. SPIE 4056, 50–64 (2000). [CrossRef]  

6. R. Green, M. Eastwood, C. Sarture, T. Chrien, M. Aronsson, B. Chippendale, J. Faust, B. Pavri, C. Chovit, M. Solis, and W. Orlesa, “Imaging spectroscopy and the airborne visible/infrared imaging spectrometer (aviris),” Remote Sensing of Environment 65(3), 227–248 (1998). [CrossRef]  

7. X. Cao, T. Yue, X. Lin, S. Lin, X. Yuan, Q. Dai, L. Carin, and D. J. Brady, “Computational snapshot multispectral cameras: Toward dynamic capture of the spectral world,” IEEE Signal Processing Magazine 33(5), 95–108 (2016). [CrossRef]  

8. S. Ji, Y. Xue, and L. Carin, “Bayesian compressive sensing,” IEEE Transactions on Signal Processing 56(6), 2346 (2008). [CrossRef]  

9. G. Arce, D. Brady, L. Carin, H. Arguello, and D. Kittle, “Compressive coded aperture spectral imaging: An introduction,” IEEE Signal Processing Magazine 31(1), 105–115 (2014). [CrossRef]  

10. M. Gehm, R. John, D. Brady, R. Willett, and T. Schulz, “Single-shot compressive spectral imaging with a dual-disperser architecture,” Opt. Express 15(21), 14013–14027 (2007). [CrossRef]   [PubMed]  

11. A. Wagadarikar, R. John, R. Willett, and D. Brady, “Single disperser design for coded aperture snapshot spectral imaging,” Appl. Opt. 47(10), B44–B51 (2008). [CrossRef]   [PubMed]  

12. X. Lin, Y. Liu, J. Wu, and Q. Dai, “Spatial-spectral encoded compressive hyperspectral imaging,” ACM Transactions on Graphics 33(6), 233 (2014). [CrossRef]  

13. X. Lin, G. Wetzstein, Y. Liu, and Q. Dai, “Dual-coded compressive hyperspectral imaging,” Opt. Lett. 39(7), 2044–2047 (2014). [CrossRef]   [PubMed]  

14. X. Yuan, T. H. Tsai, R. Zhu, P. Llull, D. Brady, and L. Carin, “Compressive hyperspectral imaging with side information,” IEEE Journal of Selected Topics in Signal Processing 9(6), 964–976 (2015). [CrossRef]  

15. Y. Murakami, M. Yamaguchi, and N. Ohyama, “Hybrid-resolution multispectral imaging using color filter array,” Opt. Express 20(7), 7173–7183 (2012). [CrossRef]   [PubMed]  

16. H. Arguello and G. Arce, “Colored coded aperture design by concentration of measure in compressive spectral imaging,” IEEE Transactions on Image Processing 23(4), 1896–1908 (2014). [CrossRef]   [PubMed]  

17. C. V. Correa, H. Arguello, and G. R. Arce, “Snapshot colored compressive spectral imager,” J. Opt. Soc. Am. A 32(10), 1754–1763 (2015). [CrossRef]  

18. S. Baek, I. Kim, D. Gutierrez, and M. Kim, “Compact single-shot hyperspectral imaging using a prism,” ACM Transactions on Graphics 36(6), 217 (2017). [CrossRef]  

19. C. Correa, C. Hinojosa, G. Arce, and H. Arguello, “Multiple snapshot colored compressive spectral imager,” Optical Engineering 56(4), 041309 (2016). [CrossRef]  

20. K. Degraux, V. Cambareri, B. Geelen, L. Jacques, and G. Lafruit, “Multispectral compressive imaging strategies using fabry–pérot filtered sensors,” IEEE Transactions on Computational Imaging 4(4), 661–673 (2018). [CrossRef]  

21. P. Madec, “Overview of deformable mirror technologies for adaptive optics and astronomy,” Proc. SPIE 8447, 844705 (2012). [CrossRef]  

22. E. Fernández and P. Artal, “Membrane deformable mirror for adaptive optics: performance limits in visual optics,” Opt. Express 11(9), 1056–1069 (2003). [CrossRef]   [PubMed]  

23. M. D. Robinson, G. Feng, and D. G. Stork, “Spherical coded imagers: improving lens speed, depth-of-field, and manufacturing yield through enhanced spherical aberration and compensating image processing,” Proc. SPIE 7429, 74290M (2009). [CrossRef]  

24. E. Vera and P. Meza, “Snapshot compressive imaging using aberrations,” Opt. Express 26(2), 1206–1218 (2018). [CrossRef]   [PubMed]  

25. R. J. Noll, “Zernike polynomials and atmospheric turbulence,” J. Opt. Soc. Am. A 66(3), 207–211 (1976). [CrossRef]  

26. M. Afonso, J. Bioucas-Dias, and M. Figueiredo, “An augmented lagrangian approach to the constrained optimization formulation of imaging inverse problems,” IEEE Transactions on Image Processing 20(3), 681–695 (2011). [CrossRef]  

27. M. Akgün, J. Garcelon, and R. Haftka, “Fast exact linear and non-linear structural reanalysis and the sherman–morrison–woodbury formulas,” International Journal for Numerical Methods in Engineering 50(7), 1587–1606 (2001). [CrossRef]  

28. F. Yasuma, T. Mitsunaga, D. Iso, and S. K. Nayar, “Generalized assorted pixel camera: postcapture control of resolution, dynamic range, and spectrum,” IEEE Transactions on Image Processing 19(9), 2241–2253 (2010). [CrossRef]   [PubMed]  

29. D. H. Foster, K. Amano, S. M. Nascimento, and M. J. Foster, “Frequency of metamerism in natural scenes,” J. Opt. Soc. Am. A 23(10), 2359–2372 (2006). [CrossRef]  

30. S. Rabanser, O. Shchur, and S. Günnemann, “Introduction to tensor decompositions and their applications in machine learning,” https://arxiv.org/abs/1711.10781.

31. N. Sidiropoulos, L. De Lathauwer, X. Fu, K. Huang, E. Papalexakis, and C. Faloutsos, “Tensor decomposition for signal processing and machine learning,” IEEE Transactions on Signal Processing 65(13), 3551–3582 (2017). [CrossRef]  


Noll, R. J.

R. J. Noll, “Zernike polynomials and atmospheric turbulence,” J. Opt. Soc. Am. A 66(3), 207–211 (1976).
[Crossref]

Ohyama, N.

Orlesa, W.

R. Green, M. Eastwood, C. Sarture, T. Chrien, M. Aronsson, B. Chippendale, J. Faust, B. Pavri, C. Chovit, M. Solis, and W. Orlesa, “Imaging spectroscopy and the airborne visible/infrared imaging spectrometer (aviris),” Remote Sensing of Environment 65(3), 227–248 (1998).
[Crossref]

Papalexakis, E.

N. Sidiropoulos, L. De Lathauwer, X. Fu, K. Huang, E. Papalexakis, and C. Faloutsos, “Tensor decomposition for signal processing and machine learning,” IEEE Transactions on Signal Processing 65(13), 3551–3582 (2017).
[Crossref]

Pavri, B.

R. Green, M. Eastwood, C. Sarture, T. Chrien, M. Aronsson, B. Chippendale, J. Faust, B. Pavri, C. Chovit, M. Solis, and W. Orlesa, “Imaging spectroscopy and the airborne visible/infrared imaging spectrometer (aviris),” Remote Sensing of Environment 65(3), 227–248 (1998).
[Crossref]

Robinson, M. D.

M. D. Robinson, G. Feng, and D. G. Stork, “Spherical coded imagers: improving lens speed, depth-of-field, and manufacturing yield through enhanced spherical aberration and compensating image processing,” Proc. SPIE 7429, 74290M (2009).
[Crossref]

Sarture, C.

R. Green, M. Eastwood, C. Sarture, T. Chrien, M. Aronsson, B. Chippendale, J. Faust, B. Pavri, C. Chovit, M. Solis, and W. Orlesa, “Imaging spectroscopy and the airborne visible/infrared imaging spectrometer (aviris),” Remote Sensing of Environment 65(3), 227–248 (1998).
[Crossref]

Schechner, Y.

Y. Schechner and S. Nayar, “Generalized mosaicing,” in IEEE International Conference on Computer Vision, (IEEE, 2001), pp. 17–24.

Schulz, T.

Sidiropoulos, N.

N. Sidiropoulos, L. De Lathauwer, X. Fu, K. Huang, E. Papalexakis, and C. Faloutsos, “Tensor decomposition for signal processing and machine learning,” IEEE Transactions on Signal Processing 65(13), 3551–3582 (2017).
[Crossref]

Simmons, C.

C. Nansen, K. Singh, A. Mian, B. J. Allison, and C. Simmons, “Using hyperspectral imaging to characterize consistency of coffee brands and their respective roasting classes,” Journal of Food Engineering 190, 34–39 (2016).
[Crossref]

Singh, K.

C. Nansen, K. Singh, A. Mian, B. J. Allison, and C. Simmons, “Using hyperspectral imaging to characterize consistency of coffee brands and their respective roasting classes,” Journal of Food Engineering 190, 34–39 (2016).
[Crossref]

Solis, M.

R. Green, M. Eastwood, C. Sarture, T. Chrien, M. Aronsson, B. Chippendale, J. Faust, B. Pavri, C. Chovit, M. Solis, and W. Orlesa, “Imaging spectroscopy and the airborne visible/infrared imaging spectrometer (aviris),” Remote Sensing of Environment 65(3), 227–248 (1998).
[Crossref]

Stork, D. G.

M. D. Robinson, G. Feng, and D. G. Stork, “Spherical coded imagers: improving lens speed, depth-of-field, and manufacturing yield through enhanced spherical aberration and compensating image processing,” Proc. SPIE 7429, 74290M (2009).
[Crossref]

Tsai, T. H.

X. Yuan, T. H. Tsai, R. Zhu, P. Llull, D. Brady, and L. Carin, “Compressive hyperspectral imaging with side information,” IEEE Journal of Selected Topics in Signal Processing 9(6), 964–976 (2015).
[Crossref]

Vera, E.

Wagadarikar, A.

Wetzstein, G.

Willett, R.

Wu, J.

X. Lin, Y. Liu, J. Wu, and Q. Dai, “Spatial-spectral encoded compressive hyperspectral imaging,” ACM Transactions on Graphics 33(6), 233 (2014).
[Crossref]

Xue, Y.

S. Ji, Y. Xue, and L. Carin, “Bayesian compressive sensing,” IEEE Transactions on Signal Processing 56(6), 2346 (2008).
[Crossref]

Yamaguchi, M.

Yasuma, F.

F. Yasuma, T. Mitsunaga, D. Iso, and S. K. Nayar, “Generalized assorted pixel camera: postcapture control of resolution, dynamic range, and spectrum,” IEEE Transactions on Image Processing 19(9), 2241–2253 (2010).
[Crossref] [PubMed]

Yuan, X.

X. Cao, T. Yue, X. Lin, S. Lin, X. Yuan, Q. Dai, L. Carin, and D. J. Brady, “Computational snapshot multispectral cameras: Toward dynamic capture of the spectral world,” IEEE Signal Processing Magazine 33(5), 95–108 (2016).
[Crossref]

X. Yuan, T. H. Tsai, R. Zhu, P. Llull, D. Brady, and L. Carin, “Compressive hyperspectral imaging with side information,” IEEE Journal of Selected Topics in Signal Processing 9(6), 964–976 (2015).
[Crossref]

Yue, T.

X. Cao, T. Yue, X. Lin, S. Lin, X. Yuan, Q. Dai, L. Carin, and D. J. Brady, “Computational snapshot multispectral cameras: Toward dynamic capture of the spectral world,” IEEE Signal Processing Magazine 33(5), 95–108 (2016).
[Crossref]

Zhu, R.

X. Yuan, T. H. Tsai, R. Zhu, P. Llull, D. Brady, and L. Carin, “Compressive hyperspectral imaging with side information,” IEEE Journal of Selected Topics in Signal Processing 9(6), 964–976 (2015).
[Crossref]

ACM Transactions on Graphics (2)

X. Lin, Y. Liu, J. Wu, and Q. Dai, “Spatial-spectral encoded compressive hyperspectral imaging,” ACM Transactions on Graphics 33(6), 233 (2014).
[Crossref]

S. Baek, I. Kim, D. Gutierrez, and M. Kim, “Compact single-shot hyperspectral imaging using a prism,” ACM Transactions on Graphics 36(6), 217 (2017).
[Crossref]

Appl. Opt. (1)

IEEE Journal of Selected Topics in Signal Processing (1)

X. Yuan, T. H. Tsai, R. Zhu, P. Llull, D. Brady, and L. Carin, “Compressive hyperspectral imaging with side information,” IEEE Journal of Selected Topics in Signal Processing 9(6), 964–976 (2015).
[Crossref]

IEEE Signal Processing Magazine (2)

G. Arce, D. Brady, L. Carin, H. Arguello, and D. Kittle, “Compressive coded aperture spectral imaging: An introduction,” IEEE Signal Processing Magazine 31(1), 105–115 (2014).
[Crossref]

X. Cao, T. Yue, X. Lin, S. Lin, X. Yuan, Q. Dai, L. Carin, and D. J. Brady, “Computational snapshot multispectral cameras: Toward dynamic capture of the spectral world,” IEEE Signal Processing Magazine 33(5), 95–108 (2016).
[Crossref]

IEEE Transactions on Computational Imaging (1)

K. Degraux, V. Cambareri, B. Geelen, L. Jacques, and G. Lafruit, “Multispectral compressive imaging strategies using fabry–pérot filtered sensors,” IEEE Transactions on Computational Imaging 4(4), 661–673 (2018).
[Crossref]

IEEE Transactions on Image Processing (3)

M. Afonso, J. Bioucas-Dias, and M. Figueiredo, “An augmented lagrangian approach to the constrained optimization formulation of imaging inverse problems,” IEEE Transactions on Image Processing 20(3), 681–695 (2011).
[Crossref]

F. Yasuma, T. Mitsunaga, D. Iso, and S. K. Nayar, “Generalized assorted pixel camera: postcapture control of resolution, dynamic range, and spectrum,” IEEE Transactions on Image Processing 19(9), 2241–2253 (2010).
[Crossref] [PubMed]

H. Arguello and G. Arce, “Colored coded aperture design by concentration of measure in compressive spectral imaging,” IEEE Transactions on Image Processing 23(4), 1896–1908 (2014).
[Crossref] [PubMed]

IEEE Transactions on Signal Processing (2)

S. Ji, Y. Xue, and L. Carin, “Bayesian compressive sensing,” IEEE Transactions on Signal Processing 56(6), 2346 (2008).
[Crossref]

N. Sidiropoulos, L. De Lathauwer, X. Fu, K. Huang, E. Papalexakis, and C. Faloutsos, “Tensor decomposition for signal processing and machine learning,” IEEE Transactions on Signal Processing 65(13), 3551–3582 (2017).
[Crossref]

International Journal for Numerical Methods in Engineering (1)

M. Akgün, J. Garcelon, and R. Haftka, “Fast exact linear and non-linear structural reanalysis and the sherman–morrison–woodbury formulas,” International Journal for Numerical Methods in Engineering 50(7), 1587–1606 (2001).
[Crossref]

J. Opt. Soc. Am. A (3)

Journal of Biomedical Optics (1)

G. Lu and B. Fei, “Medical hyperspectral imaging: a review,” Journal of Biomedical Optics 19(1), 010901 (2014).
[Crossref]

Journal of Food Engineering (1)

C. Nansen, K. Singh, A. Mian, B. J. Allison, and C. Simmons, “Using hyperspectral imaging to characterize consistency of coffee brands and their respective roasting classes,” Journal of Food Engineering 190, 34–39 (2016).
[Crossref]

Opt. Express (4)

Opt. Lett. (1)

Optical Engineering (1)

C. Correa, C. Hinojosa, G. Arce, and H. Arguello, “Multiple snapshot colored compressive spectral imager,” Optical Engineering 56(4), 041309 (2016).
[Crossref]

Proc. SPIE (3)

P. Madec, “Overview of deformable mirror technologies for adaptive optics and astronomy,” Proc. SPIE 8447, 844705 (2012).
[Crossref]

M. D. Robinson, G. Feng, and D. G. Stork, “Spherical coded imagers: improving lens speed, depth-of-field, and manufacturing yield through enhanced spherical aberration and compensating image processing,” Proc. SPIE 7429, 74290M (2009).
[Crossref]

N. Gat, “Imaging spectroscopy using tunable filters: a review,” Proc. SPIE 4056, 50–64 (2000).
[Crossref]

Remote Sensing of Environment (1)

R. Green, M. Eastwood, C. Sarture, T. Chrien, M. Aronsson, B. Chippendale, J. Faust, B. Pavri, C. Chovit, M. Solis, and W. Orlesa, “Imaging spectroscopy and the airborne visible/infrared imaging spectrometer (aviris),” Remote Sensing of Environment 65(3), 227–248 (1998).
[Crossref]

Transactions of the ASAE (1)

M. Kim, Y. Chen, and P. Mehl, “Hyperspectral reflectance and fluorescence imaging system for food quality and safety,” Transactions of the ASAE 44(3), 721–729 (2001).

Other (2)

Y. Schechner and S. Nayar, “Generalized mosaicing,” in IEEE International Conference on Computer Vision, (IEEE, 2001), pp. 17–24.

S. Rabanser, O. Shchur, and S. Günnemann, “Introduction to tensor decompositions and their applications in machine learning,” https://arxiv.org/abs/1711.10781 .

Supplementary Material (1)

Visualization 1: Spectral imaging reconstructions using the proposed approach for four simulated datasets.



Figures (8)

Fig. 1 Diagram of four multishot compressive multispectral cameras: (a) DD-CASSI, (b) SD-CASSI, (c) 3D-CASSI, and (d) the proposed architecture.
Fig. 2 Experimental setup implemented in the laboratory.
Fig. 3 Spectral image reconstruction with K = 2 and SNR = 50 dB. For the first snapshot, in both cases (a) and (b), the measurement is generated using W(x, y) = a_i Z_i, where i ∈ {4, ..., 15} and a_i ∈ {0.15, 0.2, 0.25, 0.3, 0.35, 0.4, 0.45, 0.5}. For the second snapshot, the measurement is generated using (a) W(x, y) = 0 and (b) $W(x, y) = \sum_{i=4}^{15} a_i Z_i$.
Fig. 4 Reconstruction quality analysis for K = {2, 3, 4} and SNR = {20, 30, 50} dB. In addition, measurement examples generated with the wavefronts (b) W = 0, (c) W = Z5, (d) W = Z12, and (e) W = Z13 are illustrated (see the wavefront sketch after this figure list).
Fig. 5 (Visualization 1) (a) Visual comparison of the spectral reconstructions, mapped to an RGB profile, (b) absolute error of the reconstructions, and (c) comparison of 8 out of the 26 spectral bands. These simulations were performed with SNR = 20 dB and K = 3.
Fig. 6 Visual comparison of spectral signatures for reconstructions from the SD-CASSI, DD-CASSI, 3D-CASSI, and the proposed architecture.
Fig. 7 Experimental results using the “Digital beads” scene, where (a) is an RGB version of the reconstruction and (b) is a visualization of 12 reconstructed spectral bands. (c) Spectral signature reconstructions of four points of the scene, using K = 3 snapshot measurements.
Fig. 8 Experimental results using the “Toys” scene, where (a) is an RGB version of the reconstruction and (b) is a visualization of 12 reconstructed spectral bands. (c) Spectral signature reconstructions of four points of the scene, using K = 3 snapshot measurements.
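
Figures 3 and 4 build the programmable wavefront W(x, y) as weighted sums of Zernike polynomials a_i Z_i applied by the deformable mirror. The following is a minimal NumPy sketch of how such a wavefront map could be assembled; the Noll-indexed modes Z4–Z6, the grid size, and the coefficient pairing are illustrative assumptions, not the exact modes or sampling used for the figures.

```python
import numpy as np

def zernike_noll(j, rho, theta):
    """A few low-order Zernike modes in Noll indexing (only Z4-Z6, as an illustration)."""
    if j == 4:   # defocus
        return np.sqrt(3) * (2 * rho**2 - 1)
    if j == 5:   # oblique astigmatism
        return np.sqrt(6) * rho**2 * np.sin(2 * theta)
    if j == 6:   # vertical astigmatism
        return np.sqrt(6) * rho**2 * np.cos(2 * theta)
    raise NotImplementedError("only Z4-Z6 are implemented in this sketch")

def wavefront(coeffs, n=256):
    """Build W(x, y) = sum_i a_i Z_i(x, y) on the unit pupil; zero outside the aperture."""
    x = np.linspace(-1, 1, n)
    xx, yy = np.meshgrid(x, x)
    rho, theta = np.hypot(xx, yy), np.arctan2(yy, xx)
    W = np.zeros((n, n))
    for j, a in coeffs.items():
        W += a * zernike_noll(j, rho, theta)
    W[rho > 1] = 0.0          # restrict to the circular pupil
    return W

# Example with coefficient values of the magnitude quoted in the Fig. 3 caption (hypothetical pairing).
W = wavefront({4: 0.15, 5: 0.3, 6: 0.45})
```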

Tables (2)


Algorithm 1: Spectral image reconstruction


Table 1 Reconstruction Results Comparison for Simulations using K = 3.

Equations (17)


$$f_1(x, y, \lambda) = \iint \left| h(x - x', y - y', \lambda) \right|^{2} f_o(x', y', \lambda)\, dx'\, dy',$$

$$h(x, y, \lambda) = \frac{1}{\lambda z} \iint \mathcal{P}(x', y')\, e^{i 2\pi \mathcal{W}(x', y')}\, e^{i \frac{2\pi}{\lambda z}\left(x x' + y y'\right)}\, dx'\, dy',$$

$$t(x, y, \lambda) = \sum_{i_1, i_2, i_3} T_{i_1, i_2, i_3}\, \operatorname{rect}\!\left(\frac{x}{\Delta_c} - i_1,\; \frac{y}{\Delta_c} - i_2,\; \frac{\lambda}{\Delta_d} - i_3\right),$$

$$g(x, y) = \int_{\lambda_1}^{\lambda_2} \int_{-\Delta_d}^{\Delta_d} \int_{-\Delta_d}^{\Delta_d} t(x', y', \lambda)\, f_1(x', y', \lambda)\, dx'\, dy'\, d\lambda.$$

$$\underline{G} = \left[\underline{T} \odot \left(\prod_{i=1}^{3}\left[\underline{W} \odot \left(\left[\prod_{j=1}^{3} \underline{F} \times_j \mathbf{A}_j\right] \times_4 \mathbf{1}\right)\right] \times_i \mathbf{A}_i^{T}\right)\right] \times_3 \mathbf{1}^{T},$$

$$\arg\min_{\underline{F}} \left\| \underline{G} - \left[\underline{T} \odot \left(\prod_{i=1}^{3}\left[\underline{W} \odot \left(\left[\prod_{j=1}^{3} \underline{F} \times_j \mathbf{A}_j\right] \times_4 \mathbf{1}\right)\right] \times_i \mathbf{A}_i^{T}\right)\right] \times_3 \mathbf{1}^{T} \right\|_F^{2},$$

$$\arg\min_{\underline{\Theta}} \left\| \underline{G} - \left[\underline{T} \odot \left(\prod_{i=1}^{3}\left[\underline{W} \odot \left(\left[\prod_{j=1}^{3} \underline{\Theta} \times_j \mathbf{A}_j \boldsymbol{\Psi}_j\right] \times_4 \mathbf{1}\right)\right] \times_i \mathbf{A}_i^{T}\right)\right] \times_3 \mathbf{1}^{T} \right\|_F^{2} + \lambda \left\|\underline{\Theta}\right\|_1,$$

$$\underline{\Theta}^{\iota+1} \leftarrow \arg\min_{\underline{\Theta}} \left\| \underline{G} - \left[\underline{T} \odot \left(\prod_{i=1}^{3}\left[\underline{W} \odot \left(\left[\prod_{j=1}^{3} \underline{\Theta} \times_j \mathbf{A}_j \boldsymbol{\Psi}_j\right] \times_4 \mathbf{1}\right)\right] \times_i \mathbf{A}_i^{T}\right)\right] \times_3 \mathbf{1}^{T} \right\|_F^{2} + \beta \left\|\underline{\Theta}^{\iota} - \underline{\Theta}\right\|_F + \lambda_1 \left\|\underline{\Theta}\right\|_1 + \lambda_2 \left\| \prod_{j=1}^{3} \underline{\Theta} \times_j \boldsymbol{\Psi}_j \right\|_{\mathrm{TV}},$$

$$\arg\min_{\underline{\Theta}, \underline{\Omega}, \underline{\Xi}}\; \Gamma_1(\underline{\Theta}) + \Gamma_3(\underline{\Omega}) + \Gamma_4(\underline{\Xi}) \quad \text{s.t.} \quad \underline{\Theta} = \underline{\Omega},\;\; \underline{\Xi} = \prod_{j=1}^{3} \underline{\Theta} \times_j \boldsymbol{\Psi}_j,$$

$$\tilde{\underline{\Theta}}^{\iota+1} = \arg\min_{\underline{\Theta}} \left\| \underline{Y} - \left[\underline{T} \odot \left(\prod_{i=1}^{3}\left[\underline{W} \odot \Gamma_2(\underline{\Theta})\right] \times_i \mathbf{A}_i^{T}\right)\right] \times_3 \mathbf{1}^{T} \right\|_F^{2} + \mu_1 \left\|\underline{\Theta}^{\iota} - \underline{\Omega}^{\iota} - \underline{\nu}_1^{\iota}\right\|_F^{2} + \mu_2 \left\| \underline{\Xi}^{\iota} - \prod_{j=1}^{3} \underline{\Theta}^{\iota} \times_j \boldsymbol{\Psi}_j - \underline{\nu}_2^{\iota} \right\|_F^{2},$$

$$\underline{\Omega}^{\iota+1} = \arg\min_{\underline{\Omega}}\; \beta \left\|\underline{\Omega}^{\iota} - \underline{\Omega}\right\|_F^{2} + \mu_1 \left\|\underline{\Theta}^{\iota} - \underline{\Omega}^{\iota} - \underline{\nu}_1^{\iota}\right\|_F^{2} + \lambda \left\|\underline{\Omega}\right\|_1,$$

$$\underline{\Xi}^{\iota+1} = \arg\min_{\underline{\Xi}}\; \left\|\underline{\Xi}\right\|_{\mathrm{TV}} + \mu_2 \left\| \underline{\Xi}^{\iota} - \prod_{j=1}^{3} \underline{\Theta}^{\iota} \times_j \boldsymbol{\Psi}_j - \underline{\nu}_2^{\iota} \right\|_F^{2}.$$
$$\mathrm{PSNR} = \frac{1}{L} \sum_{i_3=1}^{L} \left[ 10 \log_{10}\!\left( \frac{\max\!\left(\tilde{F}_{:,:,i_3}\right)^{2}}{\mathrm{MSE}\!\left(F, \tilde{F}\right)} \right) \right],$$

$$\mathrm{SAM} = \frac{1}{MN} \sum_{i_1=1}^{M} \sum_{i_2=1}^{N} \cos^{-1}\!\left( \frac{\sum_{i_3=1}^{L} F_{i_1,i_2,i_3}\, \tilde{F}_{i_1,i_2,i_3}}{\left\|F_{i_1,i_2,:}\right\|_2 \left\|\tilde{F}_{i_1,i_2,:}\right\|_2} \right),$$
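
The PSNR and SAM expressions above are standard fidelity metrics for spectral reconstructions. The sketch below shows one way they could be evaluated for an M × N × L data cube in NumPy; the per-band MSE inside the PSNR and the variable names are assumptions, not the exact implementation behind Table 1.

```python
import numpy as np

def psnr(F, F_hat):
    """Band-averaged PSNR in dB: peak of each reconstructed band over that band's MSE."""
    L = F.shape[2]
    vals = []
    for i3 in range(L):
        mse = np.mean((F[:, :, i3] - F_hat[:, :, i3]) ** 2)
        vals.append(10 * np.log10(F_hat[:, :, i3].max() ** 2 / mse))
    return np.mean(vals)

def sam(F, F_hat, eps=1e-12):
    """Spectral Angle Mapper: mean angle (radians) between true and estimated spectra per pixel."""
    dot = np.sum(F * F_hat, axis=2)
    norms = np.linalg.norm(F, axis=2) * np.linalg.norm(F_hat, axis=2)
    return np.mean(np.arccos(np.clip(dot / (norms + eps), -1.0, 1.0)))

# Illustrative check on a random 64 x 64 x 26 cube.
F = np.random.rand(64, 64, 26)
F_hat = F + 0.01 * np.random.randn(*F.shape)
print(psnr(F, F_hat), sam(F, F_hat))
```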
$$y_{i_1, \ldots, i_{n-1}, j, i_{n+1}, \ldots, i_T} = \sum_{i_n=1}^{N_n} x_{i_1, \ldots, i_n, \ldots, i_T}\, a_{j, i_n},$$

$$\underline{\Theta} = \prod_{n=1}^{T} \underline{X} \times_n \mathbf{A}_n,$$

$$\mathbf{y} = \operatorname{vec}(\underline{Y}) = \operatorname{vec}\!\left[\prod_{n=1}^{T} \underline{X} \times_n \mathbf{A}_n\right] = \mathbf{A}\, \operatorname{vec}(\underline{X}),$$
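
The last three expressions define the mode-n product and its equivalent matrix–vector form after vectorization. The following NumPy sketch, under the usual column-major vec convention, implements the mode-n product with np.tensordot and checks numerically that a sequence of mode products matches multiplication by the Kronecker-structured matrix A3 ⊗ A2 ⊗ A1; the tensor sizes are arbitrary and only illustrate the identity.

```python
import numpy as np

def mode_n_product(X, A, n):
    """Mode-n product X x_n A: contract mode n of X with the columns of A, keeping mode order."""
    Y = np.tensordot(A, X, axes=(1, n))   # contracted mode ends up first
    return np.moveaxis(Y, 0, n)           # move it back to position n

# Random 3-way tensor and factor matrices (arbitrary sizes).
X = np.random.rand(4, 5, 6)
A = [np.random.rand(3, 4), np.random.rand(2, 5), np.random.rand(7, 6)]

Y = X
for n, An in enumerate(A):
    Y = mode_n_product(Y, An, n)

# Equivalent vectorized form: vec(Y) = (A3 kron A2 kron A1) vec(X), with column-major vec.
A_big = np.kron(A[2], np.kron(A[1], A[0]))
y1 = Y.flatten(order="F")
y2 = A_big @ X.flatten(order="F")
print(np.allclose(y1, y2))   # True
```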
