Abstract

Coded apertures with random patterns are extensively used in compressive spectral imagers to sample the incident scene in the image plane. Random sampling, however, is inadequate to capture the structural characteristics of the underlying signal, due to the sparse and structured nature of the sensing matrices in spectral imagers. This paper proposes a new approach for super-resolution compressive spectral imaging via adaptive coding. In this method, coded apertures are optimally designed based on a two-tone adaptive compressive sensing (CS) framework to improve the reconstruction resolution and accuracy of the hyperspectral imager. A liquid crystal tunable filter (LCTF) is used to scan the incident scene in the spectral domain, successively selecting different spectral channels. The output of the LCTF is modulated by the adaptive coded aperture patterns and then projected onto a low-resolution detector array. The coded aperture patterns are implemented by a digital micromirror device (DMD) with higher resolution than that of the detector. Due to the strong correlation across the spectra, the recovered images from previous spectral channels can be used as a priori information to design the adaptive coded apertures for sensing subsequent spectral channels. In particular, the coded apertures are constructed from the a priori spectral images via a two-tone hard thresholding operation that respectively extracts the structural characteristics of bright and dark regions in the underlying scene. A super-resolution image within a spectral channel can be reconstructed from a few snapshots of low-resolution measurements. Since no additional side information about the spectral scene is needed, the proposed method does not increase the system complexity. Based on the mutual-coherence criterion, the proposed adaptive CS framework is theoretically proven to improve the sensing efficiency of the spectral images.
Simulations and experiments are provided to demonstrate and assess the proposed adaptive coding method. Finally, the underlying concepts are extended to a multi-channel method to compress the hyperspectral data cube in the spatial and spectral domains simultaneously.

© 2020 Chinese Laser Press

Corrections

Chang Xu, Tingfa Xu, Ge Yan, Xu Ma, Yuhan Zhang, Xi Wang, Feng Zhao, and Gonzalo R. Arce, "Super-resolution compressive spectral imaging via two-tone adaptive coding: publisher’s note," Photon. Res. 8, 892-892 (2020)
https://www.osapublishing.org/prj/abstract.cfm?uri=prj-8-6-892

7 April 2020: A typographical correction was made to the author affiliations.

1. INTRODUCTION

Hyperspectral imaging acquires the spatio-spectral data cube of a scene over dozens to hundreds of narrow spectral bands [1,2]. Benefiting from the abundant spectral information, hyperspectral imaging has been used in a diverse range of applications, from precision agriculture [3], food safety [4], and medical diagnosis [5] to mineral mapping [6]. Based on the data cube acquisition modes, hyperspectral imaging approaches can be classified into four categories: whiskbroom [7], pushbroom [8], snapshot [9], and staring [10] approaches. Specifically, whiskbroom and pushbroom approaches scan the scene in pointwise and linewise fashions, respectively. Snapshot approaches multiplex the three-dimensional (3D) data cube onto a two-dimensional (2D) sensor, which preserves the 2D spatial information, as images typically have underlying spatial structure. However, there is a trade-off between the scanning process and the spatial resolution [11]. To address this issue, staring hyperspectral imagers are designed to capture the 2D spatial information of the scene at once, and to sequentially scan the spectra using a rotating filter wheel or a tunable filter [12]. Since they can flexibly select the spectral range of interest, staring approaches have been widely used in the field of hyperspectral imaging [13].

Recently, liquid crystal tunable filters (LCTFs) have been used as spectral bandpass filters in staring hyperspectral imagers owing to their simple design, versatility, low wavefront distortion, flexible throughput control, fast switching speed, and large aperture with wide field of view [14–17]. However, the conventional LCTF-based hyperspectral imager is limited by the fixed spatial resolution of its detector. In addition, high-resolution detectors may not be available in some cases [18]. Moreover, the substantial volume of hyperspectral images poses a dilemma for the subsequent transmission and storage of the overall data [19]. The fast-emerging compressive sensing (CS) theory provides a useful tool to reconstruct high-dimensional signals from far fewer multiplexed measurements than required by the Nyquist sampling theorem, based on the sparse representation assumption [20–22]. Therefore, high-resolution images can be recovered from measurements on a low-resolution detector using CS methods [23]. In addition, compressive data are obtained during the acquisition stage, so the total volume of measurements is smaller than the original data cube, which benefits data transmission and storage. With these motivations, Wang et al. applied CS to develop a super-resolution LCTF-based hyperspectral imaging approach [24]. In Ref. [24], the spectral images were modulated in the spatial domain by a high-resolution random coded aperture, and then projected onto a low-resolution detector. However, random coding is suboptimal due to the sparse sensing matrix of the system, and thus the reconstruction performance can be improved by proper code design. Remarkably, the coded aperture determines the structure of the projection matrix in compressive sampling; it thus plays an important role in improving the reconstruction accuracy of the hyperspectral data cube [25,26].
The design of coded apertures has therefore attracted wide attention from researchers, remaining an important open issue in this field [27,28].

In the past, a set of numerical approaches has been developed to optimize coded apertures based on the restricted isometry property [29,30] or empirical design rules [31,32]. However, the coded apertures in these methods are pre-designed before the data acquisition and cannot self-adapt to the characteristics of the underlying signal. In addition, the optimization methods add a high computational load to the design of spectral imaging systems. Recently, side information has been used to exploit non-local similarity [33], structural sparsity [34], rank minimization [35], etc. [36–39] to aid the reconstruction of undersampled signals. Building on this concept, other approaches have been proposed to design the coded apertures based on a priori information about the underlying spectral images [40,41]. Nevertheless, these approaches require additional optical paths or auxiliary sensors to detect the side information of the spectral scene, which inevitably increases system complexity. In Ref. [42], Yang et al. proposed an adaptive CS sampling strategy based on a dictionary learned from training data. However, the designed projection matrix in this method is not binary, which makes it difficult to implement in hardware.

This paper proposes a two-tone adaptive CS (TACS) framework that can be easily implemented by coded apertures to enhance the spatial resolution and image quality of compressive spectral imaging systems. A sketch of the proposed hyperspectral imager is illustrated in Fig. 1. The light rays emitted from the target are incident upon the LCTF, which successively selects different spectral channels. The LCTF is considered an ideal spectral filter with narrow bandwidth that outputs a monochromatic image corresponding to its center wavelength [43]. The output of the LCTF is collected by imaging lens 1, and then spatially modulated by a high-resolution coded aperture generated according to the TACS method. In hardware, the coded aperture patterns are implemented by the digital micromirror device (DMD). The DMD consists of an array of micromirrors that are individually controllable to generate different binary coded patterns [44–46]. Finally, imaging lens 2 focuses the reflected light rays onto a low-resolution complementary metal oxide semiconductor (CMOS) detector. The CMOS detector collects the compressive measurements, from which a high-resolution spectral image selected by the LCTF can be reconstructed. In order to obtain the 3D spectral data cube, the center wavelength of the LCTF is tuned gradually to scan the different spectral slices. Note that the proposed imager encodes the spectral images in the spatial domain, and thus spatial super-resolution is achieved while the number of acquired spectral channels remains the same. Because most hyperspectral images exhibit strong correlation among spectral channels, the spectra are assumed to be smooth [26,47]. Therefore, the recovered images from the previous spectral channels can be used as reference images to construct the adaptive coded apertures for the following spectral channels.
Given an a priori spectral image, a pair of complementary coded aperture patterns is generated by a reverse thresholding operation to respectively extract the structural characteristics of bright and dark regions. Since no extra side information of the spectral scene is needed, the proposed method does not increase the complexity of the imaging system. In addition, the computational complexity to calculate the adaptive coded apertures is negligible, compared to that of the conventional coded aperture optimization methods and the reconstruction process of the spectral images.


Fig. 1. Sketch of the LCTF-based hyperspectral imager.


In order to further improve the reconstruction performance, multiple snapshots are carried out to increase the number of measurements. In the data acquisition process, the center wavelength of the LCTF is first fixed, and then different coded aperture patterns are switched successively on the DMD. In this paper, the different coded aperture patterns are generated from one reference image using the random dithering method [48]. It is worth highlighting that the proposed TACS framework is theoretically validated based on the mutual-coherence criterion: in a statistical sense, the projection matrix of TACS is shown to improve the sensing efficiency of the underlying signal. Simulation and experimental results verify the superiority of the proposed TACS coding method over the random coding method in terms of imaging performance. In addition, a comparison with other adaptive coding methods is provided to further assess the TACS coding method.

It should be noted that for the TACS method, each measurement encompasses the information of one spectral channel; thus, only spatial compression is conducted. In order to compress the hyperspectral data cube in both the spatial and spectral domains, the underlying concepts are extended to a multi-channel TACS method. During one integration time interval of the detector, the LCTF is switched several times to encompass a series of spectral channels in one snapshot. In each snapshot, different spectral channels are modulated by different coded aperture patterns, then multiplexed and integrated on the CMOS detector. By doing so, each measurement includes the information of multiple spectral channels, thus increasing the compression capacity of the system. A set of simulations is conducted to prove the feasibility of the proposed multi-channel TACS method.

The remainder of this paper is organized as follows. Section 2 introduces the TACS framework. The proposed two-tone adaptive compressive hyperspectral imaging system is described in Section 3. Simulation and experimental results of the TACS method are provided in Section 4 and Section 5, respectively. The multi-channel TACS method is proposed and assessed in Section 6. Section 7 concludes the paper with some remarks.

2. TWO-TONE ADAPTIVE CS FRAMEWORK

A. CS Principles

It is known that most natural signals and images are sparse in some representation basis. Suppose $X \in \mathbb{R}^{N \times 1}$ is a $K$-sparse signal in the basis $\Psi = [\psi_1, \psi_2, \dots, \psi_N] \in \mathbb{R}^{N \times N}$; it can then be expressed as $X = \Psi \Theta$, where $\Psi$ is the sparse basis and $\Theta \in \mathbb{R}^{N \times 1}$ is the sparse coefficient vector containing only $K$ ($K \ll N$) significant elements. Commonly used sparse bases include the Fourier basis, the discrete cosine transform (DCT) basis, wavelet bases, and so on [49]. Let $Y$ represent the compressive measurements of $X$, given by $Y = \Phi X = \Phi \Psi \Theta$, where $\Phi = [\phi_1, \phi_2, \dots, \phi_L]^{T} \in \mathbb{R}^{L \times N}$ is the projection matrix with $L \ll N$. According to CS theory, the sparse signal $X$ can be reconstructed from a few compressive measurements by solving the following inverse problem [20]:

$$\hat{\Theta} = \arg\min_{\Theta} \|\Theta\|_1, \quad \text{s.t.} \ Y = \Phi X = \Phi \Psi \Theta, \tag{1}$$
where $\|\cdot\|_1$ is the $\ell_1$-norm, and $\hat{\Theta}$ is the reconstructed coefficient vector. Over the past years, a large number of algorithms have been proposed to efficiently solve the optimization problem in Eq. (1) [50–52].
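To make Eq. (1) concrete, the sketch below recovers a sparse vector from random compressive measurements using the iterative shrinkage-thresholding algorithm (ISTA) applied to the unconstrained Lagrangian form of the problem. Python is used only for illustration; the function name, problem sizes, and parameter values are our own choices rather than the paper's (which uses GPSR-type solvers).

```python
import numpy as np

def ista(A, y, lam=1e-3, n_iter=3000):
    """ISTA for min_t 0.5*||y - A t||_2^2 + lam*||t||_1 (Lagrangian form of Eq. (1))."""
    t = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # inverse Lipschitz constant of the gradient
    for _ in range(n_iter):
        z = t - step * (A.T @ (A @ t - y))      # gradient step on the data-fidelity term
        t = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft thresholding
    return t

# Toy example with Psi = I, so the signal itself is K-sparse.
rng = np.random.default_rng(0)
N, L, K = 64, 32, 3
Phi = rng.standard_normal((L, N)) / np.sqrt(L)  # random projection matrix
theta = np.zeros(N)
support = rng.choice(N, K, replace=False)
theta[support] = 1.0
y = Phi @ theta                                 # noiseless compressive measurements
theta_hat = ista(Phi, y)
```

With $L = 32$ measurements and $K = 3$ non-zeros, the $\ell_1$ solution coincides with the true sparse vector, and the iterates converge to it.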

It then becomes natural to ask what properties the projection matrix $\Phi$ needs to satisfy to recover the signal successfully and accurately. Consider first the most general case where no a priori information about $X$ is known. It has been shown that if $\Psi$ is incoherent with $\Phi$, then $X$ can be successfully recovered when the number of measurements satisfies $L = C \cdot K \cdot \log N \ll N$, where $C > 1$ is an oversampling factor [53]. The mutual coherence between $\Psi$ and $\Phi$ can be evaluated by

$$\mu = \max\left\{ |\langle \phi_i, \psi_j \rangle|^2 \right\}, \quad i = 1, 2, \dots, L \ \text{and} \ j = 1, 2, \dots, N, \tag{2}$$
where $\phi_i$ is the $i$th row of $\Phi$, $\psi_j$ is the $j$th column of $\Psi$, and the vectors $\phi_i$ and $\psi_j$ are normalized to have unit energy. Remarkably, random projection matrices satisfy the incoherence property with high probability for almost all sparse signals [54].

However, in other scenarios, some a priori information about $X$ is known beforehand. For instance, an approximate (not exact) observation of the original signal may be available, which can be exploited to improve the coding and reconstruction performance. To this end, Ma et al. introduced a design rule for projection matrices based on an approximate observation of the original signal [55]. First, the columns of the basis $\Psi$ are separated into two sets: $\Upsilon = \{\psi_{l(1)}, \psi_{l(2)}, \dots, \psi_{l(K)}\}$ and $\bar{\Upsilon}$, where $l(i)$ indicates the location of the column corresponding to the $i$th non-zero coefficient, and $\bar{\Upsilon}$ is the complementary set of $\Upsilon$. Accordingly, the mutual-coherence metric is divided into two parts: $\mu_{\Upsilon} = \max_{\psi_j \in \Upsilon} \{|\langle \phi_i, \psi_j \rangle|^2\}$ and $\mu_{\bar{\Upsilon}} = \max_{\psi_j \in \bar{\Upsilon}} \{|\langle \phi_i, \psi_j \rangle|^2\}$. Given some a priori information about $X$, it was shown that a good projection matrix should maximize the difference between $\mu_{\Upsilon}$ and $\mu_{\bar{\Upsilon}}$ [55].
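The two coherence metrics are straightforward to evaluate numerically when the sparse support is known. The Python sketch below (the function name and test setup are ours, not from the paper) computes $\mu_{\Upsilon}$ and $\mu_{\bar{\Upsilon}}$ for a given support set:

```python
import numpy as np

def coherence_split(Phi, Psi, support):
    """Evaluate mu_Upsilon and mu_Upsilon_bar for a known sparse support.

    Rows of Phi and columns of Psi are normalized to unit energy,
    following the convention stated below Eq. (2).
    """
    Phi_n = Phi / np.linalg.norm(Phi, axis=1, keepdims=True)
    Psi_n = Psi / np.linalg.norm(Psi, axis=0, keepdims=True)
    G = np.abs(Phi_n @ Psi_n) ** 2           # |<phi_i, psi_j>|^2 for every (i, j)
    on = np.zeros(Psi.shape[1], dtype=bool)
    on[list(support)] = True
    return G[:, on].max(), G[:, ~on].max()   # (mu_Upsilon, mu_Upsilon_bar)

# Sanity check: rows of Phi aligned with the on-support columns of the
# canonical basis give mu_Upsilon = 1 and mu_Upsilon_bar = 0.
Psi = np.eye(8)
Phi = np.eye(8)[:2]                          # two rows: e_0 and e_1
mu_on, mu_off = coherence_split(Phi, Psi, support=[0, 1])
```

When the rows of $\Phi$ align with the on-support columns of $\Psi$, $\mu_{\Upsilon}$ approaches 1 while $\mu_{\bar{\Upsilon}}$ stays small, which is exactly the separation the design rule asks for.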

B. TACS Projection Matrix

Monotone adaptive projection matrices were proposed in computational lithography to satisfy the aforementioned design rule, i.e., to maximize the difference between $\mu_{\Upsilon}$ and $\mu_{\bar{\Upsilon}}$ [55]. Suppose the observation of the original signal is given by

$$S = X + \varepsilon, \tag{3}$$
where $\varepsilon \in \mathbb{R}^{N \times 1}$ is the noise vector, whose elements are independent and identically distributed random variables obeying the zero-mean Gaussian distribution $\mathcal{N}(0, \sigma_{\varepsilon}^2)$ with variance $\sigma_{\varepsilon}^2$. A monotone adaptive projection matrix with $\pm 1$ elements can be constructed by thresholding the observation $S$. However, the negative elements of such a projection matrix cannot be physically implemented by the coded aperture used in the imaging system, because transmissive or reflective coded apertures modulate the amplitude of the incident wavefront without changing its phase.

To overcome this limitation, this paper proposes a TACS projection matrix with non-negative elements. The rows of the projection matrix $\Phi$ are independently generated by applying two-tone thresholding operations to the observation $S$. Supposing the number of measurements $L$ is even, the element of $\Phi$ in the $i$th row and $j$th column is defined as

$$\phi_{ij} = \begin{cases} \dfrac{1 + \operatorname{sgn}(S_j - \Lambda_{ij})}{2\sqrt{N}}, & \text{if } 1 \le i \le \frac{L}{2}, \\[2mm] \dfrac{1 - \operatorname{sgn}(S_j - \Lambda_{ij})}{2\sqrt{N}}, & \text{if } \frac{L}{2} < i \le L, \end{cases} \tag{4}$$
where $\operatorname{sgn}(\cdot)$ is the sign operator, $S_j$ is the $j$th element of $S$, and the threshold level $\Lambda_{ij}$ obeys the Gaussian distribution $\mathcal{N}(\mu_{\Lambda}, \sigma_{\Lambda}^2)$, where $\mu_{\Lambda}$ and $\sigma_{\Lambda}^2$ are equal to the mean value and variance of $S$, respectively.

The projection matrix defined in Eq. (4) constitutes two sub-projection matrices whose entries follow binary (0/1) patterns, up to the normalization factor in Eq. (4). Figure 2 provides an intuitive illustration of different projection matrices ($N = 401$, $L = 100$) for the one-dimensional signal in Fig. 2(a). Figure 2(b) shows the conventional random projection matrix with Bernoulli sampling, and Fig. 2(c) illustrates the TACS projection matrix. In Figs. 2(b) and 2(c), the white and black pixels have the values 1 and 0, respectively. Note that the TACS projection matrix extracts some structural characteristics of the original signal in Fig. 2(a), while the random projection matrix does not. In particular, the TACS projection matrix comprises two sub-matrices: the top-half and bottom-half sub-matrices respectively capture the structural characteristics of the signal components above and below the threshold values. Thus, the overall compressive measurements capture the features of the entire signal.
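A direct implementation of Eq. (4) is short. The Python sketch below reflects our reading of Eq. (4), including the $1/(2\sqrt{N})$ normalization, and the function name is ours; the thresholds are drawn i.i.d. from $\mathcal{N}(\mu_{\Lambda}, \sigma_{\Lambda}^2)$ as described above.

```python
import numpy as np

def tacs_projection(S, L, seed=None):
    """Two-tone adaptive projection matrix following Eq. (4).

    S : approximate observation of the signal (length N)
    L : even number of measurements (snapshots)
    Rows 1..L/2 apply (1 + sgn(S_j - Lambda_ij)) / (2*sqrt(N)); rows
    L/2+1..L apply the complementary (1 - sgn(...)) / (2*sqrt(N)).
    Thresholds Lambda_ij are drawn i.i.d. from N(mean(S), var(S)).
    """
    assert L % 2 == 0, "L must be even"
    rng = np.random.default_rng(seed)
    N = S.size
    Lam = rng.normal(S.mean(), S.std(), size=(L // 2, N))
    sgn = np.sign(S - Lam)
    top = (1 + sgn) / (2 * np.sqrt(N))   # senses bright regions (S above threshold)
    bot = (1 - sgn) / (2 * np.sqrt(N))   # senses dark regions (S below threshold)
    return np.vstack([top, bot])

S = np.linspace(0.0, 1.0, 16)
Phi = tacs_projection(S, L=8, seed=0)
```

The two halves are exactly complementary: wherever a top-half row transmits, the corresponding bottom-half row blocks, so together they sense both the bright and dark regions of $S$.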


Fig. 2. Examples of different projection matrices ($N = 401$, $L = 100$) for the original signal shown in (a): (b) the random projection matrix and (c) the TACS projection matrix.


Next, we use the design rule described in Subsection 2.A to assess the merit of the TACS projection matrix. Note that the voxels of hyperspectral images represent light intensities and are therefore always non-negative; this property will be used in the proof. In Appendix A, the proposed TACS projection matrix is proved to make the mean values of $\mu_{\Upsilon}$ and $\mu_{\bar{\Upsilon}}$ satisfy the following properties:

$$\bar{\mu}_{\Upsilon} \overset{\Delta}{=} \max_{\psi_j \in \Upsilon} E\{|\langle \phi_i, \psi_j \rangle|^2\} > \frac{\|X\|_2^2}{2 N K^2 \theta_{\max}^2}, \qquad \bar{\mu}_{\bar{\Upsilon}} \overset{\Delta}{=} \max_{\psi_j \in \bar{\Upsilon}} E\{|\langle \phi_i, \psi_j \rangle|^2\} \le \frac{1}{4N} \left\{ \left[ \frac{2\mu_X}{\sqrt{\pi (\sigma_{\Lambda}^2 + \sigma_X^2)}} - 1 \right] \sum_{m=1}^{N} \hat{\psi}_m \right\}^2, \tag{5}$$
where $E\{\cdot\}$ represents the mathematical expectation; $\theta_{\max}$ represents the maximum element of the coefficient vector $\Theta$; and $\mu_X$ and $\sigma_X^2$ are the mean value and variance of $X$, respectively. $\hat{\psi}_m$ is the $m$th element of the vector $\hat{\psi} \in \bar{\Upsilon}$ that maximizes the mathematical expectation. In Appendix B, we further prove that if $\Psi$ is chosen as the 2D inverse DCT (IDCT) basis, then Eq. (5) becomes
$$\bar{\mu}_{\Upsilon} = \max_{\psi_j \in \Upsilon} E\{|\langle \phi_i, \psi_j \rangle|^2\} > \frac{\|X\|_2^2}{2 N K^2 \theta_{\max}^2}, \qquad \bar{\mu}_{\bar{\Upsilon}} = \max_{\psi_j \in \bar{\Upsilon}} E\{|\langle \phi_i, \psi_j \rangle|^2\} \approx 0. \tag{6}$$
Equation (6) indicates that the proposed TACS projection matrix separates $\mu_{\Upsilon}$ and $\mu_{\bar{\Upsilon}}$ in the statistical sense, and thus satisfies the design rule.

Next, the superiority of the TACS projection matrix is verified by numerical simulations. Two signals of different dimensions are used as the original signals to be measured. Table 1 compares the mean values of $\mu_{\Upsilon}$ and $\mu_{\bar{\Upsilon}}$ obtained by random projection matrices and TACS projection matrices; for each projection method, the simulations are repeated 100 times. Compared with the random projection matrices, the TACS projection matrices further increase the difference between $\mu_{\Upsilon}$ and $\mu_{\bar{\Upsilon}}$, and thus better satisfy the design rule. The TACS projection matrix therefore tends to retain more information about the original signal than the random projection matrix, which helps improve the reconstruction performance.


Table 1. Comparison of the Mean Values of $\mu_{\Upsilon}$ and $\mu_{\bar{\Upsilon}}$ Obtained by Random Projection Matrices and TACS Projection Matrices

3. SUPER-RESOLUTION HYPERSPECTRAL IMAGER USING TACS CODED APERTURE

As shown in Fig. 1, the LCTF-based hyperspectral imager consists of an LCTF, a DMD, and a CMOS detector. The input spatio-spectral data cube, $f_0(x, y, \lambda)$, is scanned in the spectral domain by tuning the center wavelength of the LCTF. This paper assumes that the LCTF is an ideal spectral filter whose output is a monochromatic image corresponding to the center wavelength of the LCTF. Suppose the transmission function of the LCTF is denoted as $T_s^{n_\lambda}(\lambda)$. The output monochromatic image is expressed as $f_1^{n_\lambda}(x, y) = f_0(x, y, \lambda) \cdot T_s^{n_\lambda}(\lambda)$, where the superscript $n_\lambda$ ($n_\lambda = 1, 2, \dots, N_\lambda$) represents the order number of the spectral channel. The spectral images passing through the LCTF are then spatially modulated by the binary coded aperture patterns, which are realized by the DMD. The DMD consists of an array of micromirrors, and the tilt angle of each micromirror can be independently adjusted to change the direction of the reflected light rays. The block/unblock coded aperture patterns are generated by flipping the corresponding micromirrors, and only the light rays reflected from the unblocked pixels are collected into the main light path. The compressive measurement obtained on the detector plane can be written as

$$g^{n_\lambda}(x, y) = \iint f_1^{n_\lambda}(x, y) \cdot T_c(x, y) \, \mathrm{d}x \, \mathrm{d}y = \iint f_0(x, y, \lambda) \cdot T_s^{n_\lambda}(\lambda) \cdot T_c(x, y) \, \mathrm{d}x \, \mathrm{d}y, \tag{7}$$
where $T_c(x, y)$ represents the transmittance of the coded aperture.

In order to further improve the hyperspectral imaging performance, multiple snapshots are taken to increase the number of measurements. As shown in Fig. 3, the center wavelength of the LCTF is first fixed to select a certain spectral channel. Within each spectral channel, the coded aperture pattern is switched $L$ times to capture $L$ different compressive measurements. Afterwards, the center wavelength of the LCTF is tuned to the next spectral channel and the measurement process above is repeated. The measurement procedure terminates after all spectral channels have been scanned. In this case, the coded aperture pattern is denoted as $T_c^{l}(x, y)$, where $l = 1, 2, \dots, L$ indexes the snapshot. Thus, the $l$th compressive measurement in the $n_\lambda$th spectral channel, denoted $g^{n_\lambda, l}(x, y)$, is formulated as

$$g^{n_\lambda, l}(x, y) = \iint f_0(x, y, \lambda) \cdot T_s^{n_\lambda}(\lambda) \cdot T_c^{l}(x, y) \, \mathrm{d}x \, \mathrm{d}y. \tag{8}$$


Fig. 3. Sequential scan of the spectral channels and compressive measurements using multiple coded snapshots. Note the coarse resolution of the detector array compared with that of the coded aperture.


To simplify the analysis of the system, the imaging model in Eq. (8) is reformulated in discrete form. The hyperspectral data cube is gridded into $N_x \times N_y \times N_\lambda$ voxels. Each voxel is denoted as $F_{n_x, n_y, n_\lambda}$, where $n_x, n_y$ are the spatial coordinates ($n_x = 1, 2, \dots, N_x$ and $n_y = 1, 2, \dots, N_y$). The requirement for super-resolution is that the pitch resolution of the DMD be higher than that of the CMOS detector. Specifically, let $\Delta_c$ and $\Delta_d$ be the pitch sizes of the coded aperture and the detector, respectively. The resolution ratio between the coded aperture and the detector is then defined as $R = \Delta_d / \Delta_c$, where $R$ is assumed to be a positive integer larger than 1. That is, the dimension of the coded aperture is $N_x \times N_y$, and the dimension of the detector is $M_x \times M_y$, where $N_x / M_x = N_y / M_y = R$. The overall compression ratio is thus defined as $\gamma_o = \gamma_c \cdot L = (1/R)^2 \cdot L$, where $\gamma_c$ refers to the compression ratio for one snapshot. In the $l$th snapshot, the measurement on the $(m_x, m_y)$th pixel of the detector is represented by $G_{m_x, m_y}^{n_\lambda, l}$. The discrete version of the imaging model in Eq. (8) can be written as

$$G_{m_x, m_y}^{n_\lambda, l} = \sum_{n_x = R(m_x - 1) + 1}^{R m_x} \ \sum_{n_y = R(m_y - 1) + 1}^{R m_y} F_{n_x, n_y, n_\lambda} \cdot T_{n_x, n_y}^{n_\lambda, l}, \tag{9}$$
where $m_x = 1, 2, \dots, M_x$, $m_y = 1, 2, \dots, M_y$, and $T_{n_x, n_y}^{n_\lambda, l}$ denotes the discrete transmittance of the LCTF and the coded aperture corresponding to the voxel $(n_x, n_y, n_\lambda)$.
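Equation (9) states that each detector pixel integrates one non-overlapping $R \times R$ block of the coded high-resolution image. A minimal Python sketch of this forward model for a single channel and snapshot (the function and variable names are ours):

```python
import numpy as np

def forward_model(F, T, R):
    """One-channel, one-snapshot discrete measurement of Eq. (9).

    F : (Nx, Ny) high-resolution spectral image
    T : (Nx, Ny) coded aperture pattern
    R : integer resolution ratio Delta_d / Delta_c
    Each detector pixel sums one non-overlapping R x R block of F * T.
    """
    Nx, Ny = F.shape
    Mx, My = Nx // R, Ny // R
    coded = F * T                              # spatial modulation by the DMD
    return coded.reshape(Mx, R, My, R).sum(axis=(1, 3))

# Example: a uniform scene through a checkerboard code with R = 2;
# every 2 x 2 block transmits exactly two pixels.
F = np.ones((4, 4))
T = np.indices((4, 4)).sum(axis=0) % 2         # checkerboard pattern
G = forward_model(F, T, R=2)
```

The `reshape`/`sum` pair is just a vectorized way of evaluating the double sum in Eq. (9) for all $(m_x, m_y)$ at once.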

Alternatively, the hyperspectral data cube can be expressed through its vector representation across the $N_\lambda$ spectral channels. Let $f^{n_\lambda} \in \mathbb{R}^{N_x N_y \times 1}$ denote the monochromatic image in the $n_\lambda$th spectral channel, which is sparse in a basis $\Psi \in \mathbb{R}^{(N_x N_y) \times (N_x N_y)}$, such that $f^{n_\lambda} = \Psi \Theta^{n_\lambda}$. Let $g^{n_\lambda} \in \mathbb{R}^{L M_x M_y \times 1}$ be the vectorized representation of the measurements $G$ in the $n_\lambda$th spectral channel. With this notation, the imaging model in Eq. (9) can be rewritten as

$$g^{n_\lambda} = \Phi^{n_\lambda} f^{n_\lambda} + \rho^{n_\lambda} = \Phi^{n_\lambda} \Psi \Theta^{n_\lambda} + \rho^{n_\lambda}, \tag{10}$$
where $\rho^{n_\lambda} \in \mathbb{R}^{L M_x M_y \times 1}$ represents the measurement noise, and $\Phi^{n_\lambda} \in \mathbb{R}^{(L M_x M_y) \times (N_x N_y)}$ is the spatial transmission matrix of the coded aperture for the $n_\lambda$th spectral channel, constructed as follows:
$$\Phi^{n_\lambda} = \begin{bmatrix} \Phi_1^{n_\lambda} & \mathbf{0} & \cdots & \mathbf{0} \\ \mathbf{0} & \Phi_2^{n_\lambda} & \cdots & \mathbf{0} \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{0} & \mathbf{0} & \cdots & \Phi_{M_x M_y}^{n_\lambda} \end{bmatrix}, \tag{11}$$
where $\Phi_i^{n_\lambda} \in \mathbb{R}^{L \times R^2}$ ($i = 1, 2, \dots, M_x M_y$) denotes the transmission matrix corresponding to the $i$th detector pixel, and $\mathbf{0} \in \mathbb{R}^{L \times R^2}$ represents the zero matrix.

Thus, the coefficient vector of the hyperspectral image in the $n_\lambda$th spectral channel can be reconstructed by solving the following inverse optimization problem:

$$\hat{\Theta}^{n_\lambda} = \arg\min_{\Theta^{n_\lambda}} \|\Theta^{n_\lambda}\|_1, \quad \text{s.t.} \ \|g^{n_\lambda} - \Phi^{n_\lambda} \Psi \Theta^{n_\lambda}\|_2 \le \beta, \tag{12}$$
where $\|\cdot\|_2$ is the $\ell_2$-norm, and $\beta$ is the noise bound. Owing to its advantage in reconstruction quality, the gradient projection for sparse reconstruction (GPSR) algorithm is used to solve the optimization problem [50]. Other reconstruction algorithms from the CS literature, such as the two-step iterative shrinkage/thresholding (TwIST) algorithm [51] and the sparse reconstruction by separable approximation (SpaRSA) algorithm [52], can also be used.

Next, we describe how to generate the TACS coded apertures during the measurement process. Because hyperspectral images exhibit strong correlation among spectral channels (that is, the spectra are smooth), the images in adjacent spectral channels have similar structural characteristics [56]. Therefore, we can divide the entire spectra into several sub-groups, each of which includes several adjacent spectral channels. As shown in Fig. 4, for each sub-group, one reference spectral channel is selected and reconstructed using a random coded aperture. The reconstructed reference image is then used as the a priori information to construct the TACS coded apertures for all spectral channels in that sub-group.


Fig. 4. Method to generate the TACS coded apertures for the hyperspectral imaging systems.


To explain this process more clearly, consider the special case $M_x M_y = 1$. The images in the $N_u$th and $N_v$th spectral channels, which belong to the same sub-group, are denoted by $I^{N_u} \in \mathbb{R}^{N_x \times N_y}$ and $I^{N_v} \in \mathbb{R}^{N_x \times N_y}$, respectively. Assume the $N_u$th spectral channel is the reference channel. Then, the reconstruction of $I^{N_u}$, denoted by $\hat{I}^{N_u}$, is used as the a priori information to design the TACS coded aperture for the spectral image $I^{N_v}$. Define an operator $\mathrm{vec}(\cdot)$ that transforms an image into its vectorized representation by stacking all of its columns. Due to the similarity between adjacent spectral images, we have $\mathrm{vec}(\hat{I}^{N_u}) = \mathrm{vec}(I^{N_v}) + \varepsilon$, where $\varepsilon$ denotes the error term. Compared with Eq. (3), $\mathrm{vec}(I^{N_v})$ and $\mathrm{vec}(\hat{I}^{N_u})$ can be regarded as the original signal $X$ and its approximate observation $S$, respectively. Letting $N = N_x N_y$, we can generate $L$ different adaptive projection vectors of length $N$ according to Eq. (4). These projection vectors are then folded back into $L$ different 2D TACS coded aperture patterns of dimension $N_x \times N_y$. In practice, we can reconstruct every $R \times R$ sub-region of the spectral images independently, and then stitch the sub-regions together to obtain the full spectral images. In this case, $\Phi_i^{n_\lambda}$ in Eq. (11) is generated by the same process as above. Each row of $\Phi_i^{n_\lambda}$ is denoted by $\phi_i^{n_\lambda, l} \in \mathbb{R}^{1 \times R^2}$, which represents the transmission function for the $i$th spatial pixel of the detector in the $l$th snapshot; that is, $\Phi_i^{n_\lambda} = [(\phi_i^{n_\lambda, 1})^T, (\phi_i^{n_\lambda, 2})^T, \dots, (\phi_i^{n_\lambda, L})^T]^T$. Then, $\phi_i^{n_\lambda, l}$ can be folded into an $R \times R$ matrix denoted by $\Pi_i^{n_\lambda, l}$, which represents the $R \times R$ sub-region of the coded aperture for the $l$th snapshot. Stitching the sub-matrices $\Pi_1^{n_\lambda, l}, \Pi_2^{n_\lambda, l}, \dots, \Pi_{M_x M_y}^{n_\lambda, l}$ together yields the $l$th coded aperture pattern.
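The fold-and-stitch step above is a pure array rearrangement. The sketch below (in Python; the function name and the row-major pixel ordering are our own assumptions) folds each per-pixel projection vector into its $R \times R$ tile and stitches the tiles into one full coded aperture pattern:

```python
import numpy as np

def assemble_pattern(rows, Mx, My, R):
    """Stitch per-pixel projection vectors into one coded aperture pattern.

    rows : (Mx*My, R*R) array; rows[i] is the projection vector for the
           i-th detector pixel in a given snapshot (row-major pixel order)
    Each row is folded into an R x R tile, and the Mx*My tiles are
    stitched into the (Mx*R, My*R) coded aperture for that snapshot.
    """
    tiles = rows.reshape(Mx, My, R, R)
    # reorder axes (Mx, My, R, R) -> (Mx, R, My, R), then flatten to 2D
    return tiles.transpose(0, 2, 1, 3).reshape(Mx * R, My * R)

# Example: two detector pixels side by side (Mx=1, My=2) with R=2.
rows = np.array([[1, 1, 1, 1],
                 [2, 2, 2, 2]], dtype=float)
pattern = assemble_pattern(rows, Mx=1, My=2, R=2)
```

Each detector pixel's vector ends up as one contiguous $R \times R$ block of the pattern, matching the block-diagonal structure of Eq. (11).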

4. SIMULATION RESULTS

A. Simulation Implementation of the TACS Coding Method

In this subsection, two sets of simulations are conducted using real hyperspectral data, and the reconstruction performance obtained with TACS coded apertures is compared against that of random coded apertures. In the following simulations, Lego figures are used as the target, and the original hyperspectral data cube consists of $256 \times 256$ pixels in the spatial domain and 24 spectral bands. More specifically, the center wavelengths of the 24 spectral bands are 450 nm, 459 nm, 467 nm, 476 nm, 485 nm, 493 nm, 502 nm, 511 nm, 520 nm, 528 nm, 537 nm, 546 nm, 554 nm, 563 nm, 572 nm, 580 nm, 589 nm, 598 nm, 607 nm, 615 nm, 624 nm, 633 nm, 641 nm, and 650 nm, respectively. The intensity of the test data is normalized to the range $[0, 1]$. The coded aperture has $256 \times 256$ pixels, while the detector has $32 \times 32$ pixels; thus, every $8 \times 8$ block of coded aperture pixels corresponds to one detector pixel, and $\gamma_c = (1/8)^2$. The measurement error on the detector is emulated by white Gaussian noise with a signal-to-noise ratio (SNR) of 30 dB. In the reconstruction process, the 2D-IDCT basis is used to sparsely represent the spectral images. All calculations are carried out in MATLAB R2016a on a server with an Intel Xeon E3-1505M v5 2.8 GHz processor and 64 GB of memory.

According to Section 3, we first divide the entire spectra evenly into four sub-groups and reconstruct one reference spectral channel in each sub-group. For the reference channels, the number of snapshots is set to $L = 32$. Figures 5(a) and 5(b) show an original reference image and its reconstruction using a random coded aperture. Taking the recovered reference image as the a priori information, the TACS coded aperture patterns are generated for all spectral channels in the same sub-group. A pair of TACS coded aperture patterns is shown in Figs. 5(c) and 5(d). These two coded aperture patterns are approximately complementary, and respectively extract the structural information of the bright and dark regions. Next, the TACS coded apertures are used to modulate and reconstruct the spectral images with multiple snapshots ($L = 12$). Thus, the overall compression ratio is $\gamma_o = (1/8)^2 \cdot 12 = 0.1875$.


Fig. 5. (a) Original reference image; (b) the recovered image using a random coded aperture; and (c), (d) a pair of the TACS coded aperture patterns generated based on (b).


Figure 6 compares the spectral images obtained by the conventional system with the reconstructed spectral images obtained using different kinds of coded apertures. Figure 6(a) shows the high-resolution original images of four spectral channels in the data cube. Figure 6(b) shows the low-resolution images captured by the conventional system, which cannot achieve super-resolution without CS. The low-resolution images are simulated by downsampling the original images along both the horizontal and vertical directions with a scaling factor of 8, giving a spatial resolution of $32 \times 32$ pixels; they are presented to demonstrate the resolution improvement brought by the compressive imaging system. Figures 6(c) and 6(d) show the reconstructed spectral images of $256 \times 256$ pixels using TACS coded apertures and random coded apertures, respectively. Note that in Fig. 6(d), the spectral images are reconstructed with 12 snapshots for a fair comparison. The peak SNRs (PSNRs) of the reconstructed images are also given in the figure. The proposed TACS coded apertures effectively improve the reconstruction quality compared with the random coded apertures at the same compression ratio. To illustrate the improvement clearly, the regions around the eyes are magnified in Fig. 6. Figure 7 plots the original and reconstructed spectral signatures for three representative points on the target, selected from differently colored regions as shown in Fig. 7(a). The intensities in Figs. 7(b)–7(d) indicate the normalized spectra at those three spatial points. The TACS coded apertures achieve superior fidelity in the spectral reconstructions compared with the random coded apertures. Moreover, the reconstructed spectral signatures are smooth, which supports the use of reconstructed images from previous spectral channels as a priori information.
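The PSNR values quoted above follow the standard definition for images on a $[0, 1]$ scale; the paper does not spell out its formula, so the short Python sketch below is our own rendering of that standard metric:

```python
import numpy as np

def psnr(ref, rec, peak=1.0):
    """Peak signal-to-noise ratio (dB) between a reference and a reconstruction."""
    mse = np.mean((np.asarray(ref, dtype=float) - np.asarray(rec, dtype=float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# A reconstruction off by 0.1 everywhere on a [0, 1] scale gives 20 dB.
ref = np.zeros((4, 4))
rec = np.full((4, 4), 0.1)
value = psnr(ref, rec)
```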


Fig. 6. (a) Original spectral images, (b) the simulated low-resolution images obtained by conventional system, (c) the reconstructed spectral images using TACS coded apertures, and (d) the reconstructed spectral images using random coded apertures. Magnified details are presented as well.


Fig. 7. (a) RGB image of the scene, and (b)–(d) the original and reconstructed spectral signatures for three representative points, indicated by P1, P2, and P3 in (a).


B. Impact of Key Factors on the Reconstruction Performance

The impact of some key factors on the reconstruction performance is studied in this subsection. The three factors considered are the number of sub-groups, the compression ratio, and the reconstruction algorithm. Figure 8(a) illustrates the average reconstructed PSNRs over all spectral channels using the TACS method with different sub-group numbers. The PSNR curve obtained by random coded apertures is also provided for comparison. For each curve, the reconstruction simulations are repeated three times, and the average PSNRs of the reconstructed images are calculated. In general, the quality of the reconstructed images is enhanced when more sub-groups are used. Increasing the number of sub-groups decreases the number of adjacent spectral channels in each sub-group, so the similarity among the spectral images within each sub-group improves. Since all spectral images in a sub-group are modulated and reconstructed based on one reference image, increasing the number of sub-groups improves the reconstruction accuracy. However, dividing the entire spectral range into more sub-groups also increases the computational complexity, because each sub-group requires one reference spectral channel to be reconstructed using random coded apertures. Given the number of sub-groups $S_g$, we need to reconstruct $S_g$ reference spectral channels. Moreover, different sub-groups use different sets of adaptive coded apertures, so $S_g$ sets of adaptive coded apertures need to be generated. Thus, the computational complexity of generating the adaptive coded apertures is proportional to the number of sub-groups.


Fig. 8. Influence of three key factors on the reconstruction performance. (a) The average reconstructed PSNRs in all spectral channels using different sub-group numbers, (b) the curves of average reconstructed PSNRs with respect to different compression ratios, and (c) the average PSNRs and (d) runtime corresponding to different reconstruction algorithms.


Afterwards, we repeat the simulations in Fig. 6 using different compression ratios. The curves of the average reconstructed PSNRs over the four spectral channels with respect to the compression ratio are shown in Fig. 8(b). As can be seen, the TACS coded apertures outperform the random coded apertures in reconstruction performance at all compression ratios considered. In addition, the advantage of the TACS coded apertures becomes more pronounced as the compression ratio decreases, because the TACS coded apertures extract the structural characteristics of the target and retain more information of the spectral data cube in the compressive measurements. For instance, when the compression ratio is reduced to 0.1, the TACS coded apertures obtain a 5.16 dB gain in average PSNR over the random coded apertures.

Furthermore, simulations are conducted using the TACS coded apertures to compare the reconstruction quality and computational efficiency of different reconstruction algorithms. The settings of the simulation are the same as those in Fig. 6. For each algorithm, we repeat the reconstructions three times and calculate the average PSNRs and runtime. Figures 8(c) and 8(d) present the average PSNRs and runtime corresponding to the GPSR, TwIST, and SpaRSA algorithms, respectively. It is observed that the GPSR algorithm achieves the best reconstruction performance. Although the computational efficiency of the GPSR algorithm is lower than that of the TwIST and SpaRSA algorithms, it is still fast enough to handle the reconstruction problem. The TwIST algorithm leads to the shortest runtime, but its reconstruction performance is much worse than that of the other two algorithms, especially in the spectral channels with shorter wavelengths.
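GPSR, TwIST, and SpaRSA all target the same $\ell_1$-regularized least-squares problem. As a minimal illustration of this class of solvers (plain iterative shrinkage-thresholding, not one of the algorithms benchmarked above; the toy problem sizes are arbitrary), a sketch might look like:

```python
import numpy as np

def soft_threshold(v, t):
    """Elementwise soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, g, lam, n_iter=500):
    """Minimize 0.5*||g - A @ theta||^2 + lam*||theta||_1 by iterative shrinkage."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz constant of the gradient
    theta = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ theta - g)
        theta = soft_threshold(theta - step * grad, step * lam)
    return theta

# Toy example: recover a 3-sparse vector from 40 random measurements
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
theta_true = np.zeros(100)
theta_true[[5, 40, 77]] = [1.0, -0.8, 0.6]
g = A @ theta_true
theta_hat = ista(A, g, lam=0.01)
print("recovery error:", np.linalg.norm(theta_hat - theta_true))
```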

C. Comparison with Other Adaptive Coding Methods

This subsection provides simulation results for two other adaptive coding methods to further illustrate the superiority of the TACS coded apertures. In Ref. [40], Galvis et al. estimated the edges of the scene using the image obtained from a red–green–blue (RGB) sensor and subsequently designed a structured coded aperture. They applied two different blue-noise coded patterns to the edge and non-edge regions in order to improve the reconstruction quality of the feature boundaries. Yang et al. proposed an adaptive sampling strategy for hyperspectral images based on dictionary learning in Ref. [42]. Specifically, they first learned an over-complete dictionary from the training data and then computed its singular value decomposition; a small number of left singular vectors were then used as the rows of the projection matrix.

Building on their work, we simulate these two methods within our proposed framework. Adaptive coded patterns are generated by replacing the RGB images in Galvis's method and the training data in Yang's method with the recovered images from previous spectral channels. Figures 9(a) and 9(b) respectively present examples of the coded patterns generated by Galvis's method and Yang's method.


Fig. 9. Examples of coded patterns generated by (a) Galvis’s method and (b) Yang’s method.


Next, the data cubes are recovered from a small set of measurements, following the same process and settings as in Subsection 4.A. Figure 10(a) illustrates the original images of four spectral channels in the data cube. The reconstructed spectral images using the TACS coding method, Galvis's method, and Yang's method are shown in Figs. 10(b)–10(d), respectively. The average PSNRs for the entire reconstructed data cubes using Galvis's and Yang's methods are 27.64 dB and 30.39 dB, respectively. As can be seen, the reconstruction performance of Galvis's method is inferior to that of the TACS coding method, because the content information of the RGB image in the non-edge regions is not fully utilized. The proposed TACS coding method also outperforms Yang's method, although the gap is smaller because the dictionary in Yang's method is likewise adaptively learned from the a priori information. However, the process of dictionary learning is time-consuming, and the coded patterns in Yang's method are not binary, making them inapplicable for hardware implementation on the DMD.


Fig. 10. (a) Original spectral images in four spectral channels, (b) the reconstructed spectral images using the TACS coding method, (c) the reconstructed spectral images using Galvis’s method, and (d) the reconstructed spectral images using Yang’s method.


5. EXPERIMENTAL RESULTS

This section demonstrates the proposed TACS coded apertures on an experimental testbed of a staring hyperspectral imager developed by our group. As shown in Fig. 11, the testbed consists of an RL127-WHI-IC broadband ring light source (Edmund), a VNIR-5/20-20-S LCTF (Wayho Technology), two AC254-050-A imaging lenses (Thorlabs) with a focal length of 50.2 mm, a DLP9500 DMD (Texas Instruments), and an acA2040-90μm monochromatic CMOS camera (Basler) to capture the measurements. The CMOS camera consists of 1024×1024 pixels with a pitch size of 11.0 μm. The DMD includes 1920×1080 micromirrors with a pitch size of 10.8 μm.


Fig. 11. Testbed of the staring hyperspectral imager with the proposed TACS coded apertures.


The system should be calibrated beforehand. During the calibration process, the distances between optical components were adjusted to obtain one-to-one correspondence between the coded aperture pixels and detector pixels. In addition, the effect of dark noise was eliminated, and the non-uniformity of the light source was corrected. Taking into account the effects of system impulse response and stray light, the coded aperture patterns were first measured and recorded when the target was replaced by a diffuse plate. Then, the normalized detected patterns were regarded as the actual transmission functions of the coded apertures. When all of the micromirrors on the DMD were turned on, the measurements of the CMOS detector without spatial coding were considered the original images of the target.

As described in Subsection 4.A, the entire spectral range is evenly divided into four sub-groups. The real target under detection is shown in Fig. 12(a). In the measurement stage, a resized region of 400×400 pixels on the DMD was used to implement the coded apertures. Figure 12(b) illustrates an example of the random coded aperture patterns used to reconstruct the reference spectral image, and Figs. 12(c) and 12(d) show examples of the TACS coded aperture patterns generated according to the reference image. Compared with the random coded apertures, the TACS coded apertures effectively extract the structural characteristics of the scene. To reconstruct the reference spectral channels, 32 snapshots were taken using random coded apertures. Then, 12 snapshots were taken with the TACS coded aperture patterns to reconstruct all of the spectral images within the same sub-group. The center wavelength of the LCTF was tuned 24 times in total to obtain 24 spectral bands, from 515 nm to 630 nm with an interval of 5 nm. In order to emulate the low-resolution detector, each 8×8 block of pixels on the detector was combined into one macro-pixel, and 50×50 macro-pixels were used to acquire the measurement data.
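The macro-pixel binning used to emulate the low-resolution detector amounts to summing each 8×8 block of detector pixels. A minimal sketch (the `bin_pixels` helper is illustrative, not part of the actual acquisition software) is:

```python
import numpy as np

def bin_pixels(image, factor):
    """Combine each factor x factor block of detector pixels into one macro-pixel."""
    h, w = image.shape
    assert h % factor == 0 and w % factor == 0, "image size must be divisible by factor"
    return image.reshape(h // factor, factor, w // factor, factor).sum(axis=(1, 3))

# The 400x400 coded image region on the detector becomes 50x50 macro-pixels
coded_image = np.ones((400, 400))
low_res = bin_pixels(coded_image, 8)
print(low_res.shape)   # (50, 50)
print(low_res[0, 0])   # each macro-pixel sums 8x8 = 64 detector pixels -> 64.0
```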


Fig. 12. (a) RGB image of the target used in the experiment; (b) an example of random coded aperture patterns; and (c), (d) the examples of TACS coded aperture patterns.


As a comparison, we also built a testbed of the conventional spectral imaging system without CS. In the conventional system, all the micromirrors on the DMD were set to the unblocked state, so that the incident light was directly reflected into the main optical path. To acquire the data cube, the LCTF was switched 24 times to obtain 24 spectral images, each consisting of 50×50 pixels. The conventional system takes 10 s to collect the full data, whereas 166 s are needed to acquire all compressive measurements of the 24 spectral bands using 12 TACS coded aperture patterns, because multiple snapshots are taken to improve the reconstruction quality of the super-resolution images in the TACS method.

Figure 13 illustrates the spectral images obtained by the different methods using the experimental testbed. The original images in four spectral channels are presented in Fig. 13(a); the spatial resolution of these images is 400×400 pixels. Figure 13(b) shows the images of 50×50 pixels acquired by the conventional system, whose spatial resolution is determined by the low-resolution detector since it does not use CS. The reconstructed spectral images using TACS coded apertures and random coded apertures, both with 12 snapshots, are shown in Figs. 13(c) and 13(d), respectively. The corresponding PSNRs of the reconstructed images are also presented in the figure. The average PSNRs for the entire reconstructed data cube using TACS and random coded apertures are 23.80 dB and 20.59 dB, respectively. To make the comparison clearer, Figs. 13(e) and 13(f) show the color maps of the error patterns of the recovered images with respect to the original images for the TACS coded apertures and random coded apertures, respectively. Every pixel in an error pattern represents the absolute intensity error between the reconstructed image and the original image at that pixel. In addition, the mean square errors of all error patterns are presented in Fig. 13. It is observed that the TACS coded apertures achieve superior reconstruction quality over the random coded apertures. Appendix C provides a detailed comparison among these methods using the experimental testbed.
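The error patterns in Figs. 13(e) and 13(f) are per-pixel absolute differences with an overall mean square error. A minimal sketch of this metric (the images below are synthetic placeholders, not the experimental data) is:

```python
import numpy as np

def error_pattern(original, reconstructed):
    """Absolute per-pixel error map and its mean square error, as used in Fig. 13."""
    diff = np.abs(np.asarray(original, dtype=float)
                  - np.asarray(reconstructed, dtype=float))
    return diff, float(np.mean(diff**2))

# Synthetic stand-ins for an original image and its reconstruction
rng = np.random.default_rng(2)
orig = rng.random((400, 400))
recon = np.clip(orig + 0.05 * rng.standard_normal(orig.shape), 0.0, 1.0)
err_map, mse = error_pattern(orig, recon)
print(err_map.shape, round(mse, 4))
```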


Fig. 13. (a) Original images in four spectral channels, (b) the low-resolution images obtained by the conventional system, (c) the reconstructed images obtained by TACS coded apertures, and (d) the reconstructed images obtained by random coded apertures. The color maps of the error patterns corresponding to (e) TACS coded apertures and (f) random coded apertures.


Figure 14 plots the original and reconstructed spectra of two representative points on the target, located at P1 and P2 in Fig. 12(a). The spectra are characterized by reflectances, which are obtained through spectral calibration and are therefore unitless; the original spectral reflectances are measured by an HR4000 grating spectrometer (Ocean Optics). The reconstructed spectra using the TACS coded apertures are more consistent with the original spectra. As illustrated in Fig. 14, the spectra are smooth, which again confirms the strong correlation among neighboring spectral channels.


Fig. 14. Original and reconstructed spectra for two representative points indicated by P1 and P2 as shown in Fig. 12(a).


6. MULTI-CHANNEL TACS METHOD

In the above sections, only one spectral channel is captured during one integration time interval of the detector. This section proposes a multi-channel TACS method that introduces spectral compression while maintaining the reconstruction quality, without changing the structure of the imaging system. During one integration time interval of the detector, the LCTF is tuned several times to encompass a series of spectral channels in one snapshot, each of which is modulated by a different coded aperture pattern loaded on the DMD. The coded images in these spectral channels are then projected and integrated on the CMOS detector. In this way, spectral images in multiple channels, rather than just one channel, are involved in each measurement, thereby increasing the compression capacity of the system. Different from Section 3, the overall compression ratio is determined by

$$\gamma_{om}=\gamma_o\cdot\frac{1}{Q}=\frac{L}{R^2\cdot Q},$$
where $Q$ represents the number of times the LCTF is switched during each integration time interval, and $Q<N_\lambda$. Compared to the single-channel method described in Section 3, the compression capacity of the system is increased by $Q$-fold.
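With the settings used later in this section ($L=12$ snapshots, super-resolution factor $R=8$, and $Q=3$), the formula above evaluates as follows (a simple numeric check; the variable names are illustrative):

```python
# Overall compression ratio: gamma_o = L / R^2 (single channel)
# and gamma_om = L / (R^2 * Q) (multi-channel), with the paper's settings
L = 12   # snapshots per sub-group
R = 8    # super-resolution factor (8x8 detector pixels per macro-pixel)
Q = 3    # spectral channels encompassed per integration time interval

gamma_o = L / R**2
gamma_om = gamma_o / Q
print(gamma_o, gamma_om)   # 0.1875 0.0625
```

These match the ratios quoted in the single- and multi-channel simulations.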

Suppose the spectral images within the $N_1$th, $N_2$th, $\ldots$, $N_Q$th channels are collected during one integration time interval, and all of them belong to the same sub-group. Then, the imaging model of the multi-channel method is reformulated as follows:

$$g_m=\Phi_m f_m+\rho_m=\Phi_m\Psi_m\Theta_m+\rho_m,$$
where $g_m$ represents the measurements of the multi-channel method; $f_m$ is the raster-scanned vector of the spectral images in those $Q$ channels, i.e., $f_m=[(f_{N_1})^T,(f_{N_2})^T,\ldots,(f_{N_Q})^T]^T\in\mathbb{R}^{Q\cdot N_x\cdot N_y\times 1}$; $\Psi_m=\mathrm{diag}(\Psi_{N_1},\Psi_{N_2},\ldots,\Psi_{N_Q})\in\mathbb{R}^{(Q\cdot N_x\cdot N_y)\times(Q\cdot N_x\cdot N_y)}$; $\rho_m\in\mathbb{R}^{L\cdot M_x\cdot M_y\times 1}$ denotes the measurement noise; and $\Phi_m\in\mathbb{R}^{(L\cdot M_x\cdot M_y)\times(Q\cdot N_x\cdot N_y)}$ can be expressed as
$$\Phi_m=\begin{bmatrix}\Phi_m^1 & 0 & \cdots & 0\\ 0 & \Phi_m^2 & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & \Phi_m^{M_x\cdot M_y}\end{bmatrix},$$
where $0$ represents a zero matrix of size $L\times QR^2$, and $\Phi_m^i\in\mathbb{R}^{L\times QR^2}\ (i=1,2,\ldots,M_x\cdot M_y)$ is constructed as
$$\Phi_m^i=[\Phi_{N_1}^i,\Phi_{N_2}^i,\ldots,\Phi_{N_Q}^i],$$
where $\Phi_{N_j}^i\ (j=1,2,\ldots,Q)$ refers to the transmission sub-matrix for the $N_j$th spectral channel. As mentioned in Section 3, all spectral channels in the same sub-group share the same set of TACS coded aperture patterns, so in the single-channel method $\Phi_{N_j}^i$ remains unchanged within a sub-group. In the multi-channel method, however, spectral images in different channels must be modulated by different coded apertures, so for each $N_j$th channel the corresponding $\Phi_{N_j}^i$ is generated separately according to Section 3. Thus, $f_m$ can be reconstructed by solving the problem in Eq. (14), and the spectral images in the individual channels are then obtained by decomposing $f_m$ into $f_{N_1},f_{N_2},\ldots,f_{N_Q}$.
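The block structure of $\Phi_m$ can be sketched numerically: each diagonal block stacks the $Q$ per-channel transmission sub-matrices of one macro-pixel side by side, and the blocks of different macro-pixels are placed on the diagonal. The sizes below are toy values, not the real system's:

```python
import numpy as np
from scipy.linalg import block_diag

def multi_channel_phi(per_macropixel_blocks):
    """Assemble the multi-channel sensing matrix Phi_m.

    per_macropixel_blocks[i][j] is the L x R^2 transmission sub-matrix of
    macro-pixel i for the j-th channel in the snapshot; each diagonal block
    of Phi_m concatenates the Q channel sub-matrices horizontally.
    """
    blocks = [np.hstack(channel_mats) for channel_mats in per_macropixel_blocks]
    return block_diag(*blocks)

# Toy sizes: L = 2 snapshots, R^2 = 4 pixels per macro-pixel,
# Q = 3 channels, and Mx*My = 2 macro-pixels
rng = np.random.default_rng(0)
L, R2, Q, M = 2, 4, 3, 2
sub = [[rng.integers(0, 2, (L, R2)) for _ in range(Q)] for _ in range(M)]
Phi_m = multi_channel_phi(sub)
print(Phi_m.shape)   # (L*M, Q*R2*M) = (4, 24)
```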

Next, we provide the simulation results of the proposed multi-channel method. The reference spectral channels are obtained and reconstructed as described in Subsection 4.A, and all parameters are the same as those used there. In particular, we capture three spectral channels during each integration time interval, i.e., $Q=3$. Thus, the overall compression ratio is $\gamma_{om}=0.0625$, and the number of measurements is reduced to one third of that used for the single-channel method. Figure 15(a) shows the original spectral images with center wavelengths of 554 nm, 563 nm, and 572 nm. The simulated low-resolution images obtained by the conventional system are shown in Fig. 15(b) to illustrate the improvement in spatial resolution. The reconstructed spectral images using the multi-channel method are presented in Fig. 15(c). Figures 15(d) and 15(e) illustrate the reconstructed spectral images using the single-channel method with TACS coded apertures and random coded apertures, respectively. The average PSNR for the entire reconstructed data cube using the multi-channel method is 28.98 dB. Compared to the single-channel method with random coded apertures, the proposed multi-channel method improves the reconstruction quality. Its performance is inferior to that of the single-channel method with TACS coded apertures, however, since the reduction of measurements degrades the image quality. It can be concluded that the proposed multi-channel method improves the compression capacity of the system while maintaining the reconstruction quality to a certain extent. In Appendix D, a side-by-side comparison of the three coding methods is summarized. In the future, experiments will be conducted to verify the multi-channel method.


Fig. 15. (a) Original spectral images with center wavelengths of 554 nm, 563 nm, and 572 nm; (b) the simulated low-resolution images obtained by the conventional system; (c) the reconstructed spectral images using the proposed multi-channel method, and the reconstructed spectral images using the single-channel method with (d) TACS coded apertures and (e) random coded apertures.


7. CONCLUSION

This paper developed a novel TACS coding method for spectral imaging and demonstrated it, for the first time, on an LCTF-based hyperspectral imager. CS theory was used so that the high-resolution hyperspectral data cube can be recovered from a small set of measurements on a low-resolution detector. Meanwhile, the proposed TACS coded apertures achieve superior reconstruction performance over random coded apertures, since they capture the structural characteristics of the underlying target. Moreover, it was proven that the TACS coding method satisfies the mutual-coherence metric better than the traditional random coding method. Using the proposed TACS coding, an improvement of up to 4.81 dB in the average reconstructed PSNR was observed in the simulations. In addition, simulation results of other adaptive coding methods were provided for further comparison. Experiments on the testbed were then carried out, and the superiority of the proposed TACS coded apertures was demonstrated. Finally, the multi-channel TACS method was proposed to compress the hyperspectral data cube in the spatial and spectral domains simultaneously. The developed super-resolution staring hyperspectral imager was shown to provide promising image quality using cost-effective low-resolution detectors.

APPENDIX A: PROOF OF EQ. (5)

The proof of the first inequality in Eq. (5) is as follows:

$$\bar{\mu}_{\Upsilon}=\max_{\psi_j\in\Upsilon}E\{|\langle\phi_i,\psi_j\rangle|^2\}>\frac{1}{\theta_{\max}^2}\max_{\psi_j\in\Upsilon}E\{|\langle\phi_i,\psi_j\theta_j\rangle|^2\}>\frac{1}{K^2\theta_{\max}^2}\max_{\psi_j\in\Upsilon}E\Big\{\Big|\Big\langle\phi_i,\sum_{j=1}^{K}\psi_j\theta_j\Big\rangle\Big|^2\Big\}>\frac{1}{LK^2\theta_{\max}^2}E\Big\{\sum_{i=1}^{L}|\langle\phi_i,X\rangle|^2\Big\}=\frac{1}{LK^2\theta_{\max}^2}E\{\|\Phi X\|_2^2\},\tag{A1}$$
where θj is the jth element in the coefficient vector Θ. In the above equation,
$$\|\Phi X\|_2^2=\frac{1}{4N}\Bigg\{\sum_{i=1}^{L/2}\Big\{\sum_{j=1}^{N}[\operatorname{sgn}(S_j-\Lambda_{i,j})+1]X_j\Big\}^2+\sum_{i=L/2+1}^{L}\Big\{\sum_{j=1}^{N}[1-\operatorname{sgn}(S_j-\Lambda_{i,j})]X_j\Big\}^2\Bigg\}.\tag{A2}$$
For each i, define
$$(\Delta_i)_1=\Big[\sum_{j=1}^{N}\operatorname{sgn}(S_j-\Lambda_{i,j})X_j\Big]^2+\Big(\sum_{j=1}^{N}X_j\Big)^2+2\sum_{j=1}^{N}\operatorname{sgn}(S_j-\Lambda_{i,j})X_j\cdot\sum_{r=1}^{N}X_r,\quad i\le L/2,\tag{A3}$$
$$(\Delta_i)_2=\Big[\sum_{j=1}^{N}\operatorname{sgn}(S_j-\Lambda_{i,j})X_j\Big]^2+\Big(\sum_{j=1}^{N}X_j\Big)^2-2\sum_{j=1}^{N}\operatorname{sgn}(S_j-\Lambda_{i,j})X_j\cdot\sum_{r=1}^{N}X_r,\quad L/2<i\le L.\tag{A4}$$
Then, we can get
$$E\{\|\Phi X\|_2^2\}=E\Big\{\frac{1}{4N}\Big[\sum_{i=1}^{L/2}(\Delta_i)_1+\sum_{i=L/2+1}^{L}(\Delta_i)_2\Big]\Big\}=\frac{1}{4N}\Big\{\frac{L}{2}E[(\Delta_i)_1]+\frac{L}{2}E[(\Delta_i)_2]\Big\}=\frac{L}{8N}\big\{E[(\Delta_i)_1]+E[(\Delta_i)_2]\big\}.\tag{A5}$$
In addition, we define
$$T_1=X_rX_j[\Pr(\tilde{\Lambda}_{i,r}<X_r)\Pr(\tilde{\Lambda}_{i,j}<X_j)+\Pr(\tilde{\Lambda}_{i,r}>X_r)\Pr(\tilde{\Lambda}_{i,j}>X_j)],\qquad T_2=X_rX_j[\Pr(\tilde{\Lambda}_{i,r}>X_r)\Pr(\tilde{\Lambda}_{i,j}<X_j)+\Pr(\tilde{\Lambda}_{i,r}<X_r)\Pr(\tilde{\Lambda}_{i,j}>X_j)],\tag{A6}$$
where $\tilde{\Lambda}_{i,j}=\Lambda_{i,j}-\varepsilon_j\sim\mathcal{N}(\mu,\sigma_\Lambda^2+\sigma_X^2)$, and $\Pr(\cdot)$ denotes the probability of the argument. Then, the mathematical expectations of $(\Delta_i)_1$ and $(\Delta_i)_2$ can be calculated as
$$E[(\Delta_i)_1]=2\|X\|_2^2+\sum_{r=1}^{N}\sum_{j=1,\,j\neq r}^{N}(T_1-T_2)+\sum_{r=1}^{N}\sum_{j=1,\,j\neq r}^{N}X_rX_j+2\Big(\sum_{r=1}^{N}X_r\Big)\cdot\sum_{j=1}^{N}X_j[\Pr(\tilde{\Lambda}_{i,j}<X_j)-\Pr(\tilde{\Lambda}_{i,j}>X_j)],\tag{A7}$$
$$E[(\Delta_i)_2]=2\|X\|_2^2+\sum_{r=1}^{N}\sum_{j=1,\,j\neq r}^{N}(T_1-T_2)+\sum_{r=1}^{N}\sum_{j=1,\,j\neq r}^{N}X_rX_j-2\Big(\sum_{r=1}^{N}X_r\Big)\cdot\sum_{j=1}^{N}X_j[\Pr(\tilde{\Lambda}_{i,j}<X_j)-\Pr(\tilde{\Lambda}_{i,j}>X_j)].\tag{A8}$$
Thus, Eq. (A7) and Eq. (A8) can be written as
$$E[(\Delta_i)_1]=2\|X\|_2^2+2\sum_{j=1}^{N}X_j^2\Big[1-2Q\Big(\tfrac{X_j'}{\sqrt{\sigma_\Lambda^2+\sigma_X^2}}\Big)\Big]+2\sum_{r=1}^{N}\sum_{j=1,\,j\neq r}^{N}X_rX_j\Big[1-2Q\Big(\tfrac{X_r'}{\sqrt{\sigma_\Lambda^2+\sigma_X^2}}\Big)\Big]+\sum_{r=1}^{N}\sum_{j=1,\,j\neq r}^{N}X_rX_j\Big[1-2Q\Big(\tfrac{X_r'}{\sqrt{\sigma_\Lambda^2+\sigma_X^2}}\Big)-2Q\Big(\tfrac{X_j'}{\sqrt{\sigma_\Lambda^2+\sigma_X^2}}\Big)+4Q\Big(\tfrac{X_r'}{\sqrt{\sigma_\Lambda^2+\sigma_X^2}}\Big)Q\Big(\tfrac{X_j'}{\sqrt{\sigma_\Lambda^2+\sigma_X^2}}\Big)\Big]+\sum_{r=1}^{N}\sum_{j=1,\,j\neq r}^{N}X_rX_j,\tag{A9}$$
$$E[(\Delta_i)_2]=2\|X\|_2^2-2\sum_{j=1}^{N}X_j^2\Big[1-2Q\Big(\tfrac{X_j'}{\sqrt{\sigma_\Lambda^2+\sigma_X^2}}\Big)\Big]-2\sum_{r=1}^{N}\sum_{j=1,\,j\neq r}^{N}X_rX_j\Big[1-2Q\Big(\tfrac{X_r'}{\sqrt{\sigma_\Lambda^2+\sigma_X^2}}\Big)\Big]+\sum_{r=1}^{N}\sum_{j=1,\,j\neq r}^{N}X_rX_j\Big[1-2Q\Big(\tfrac{X_r'}{\sqrt{\sigma_\Lambda^2+\sigma_X^2}}\Big)-2Q\Big(\tfrac{X_j'}{\sqrt{\sigma_\Lambda^2+\sigma_X^2}}\Big)+4Q\Big(\tfrac{X_r'}{\sqrt{\sigma_\Lambda^2+\sigma_X^2}}\Big)Q\Big(\tfrac{X_j'}{\sqrt{\sigma_\Lambda^2+\sigma_X^2}}\Big)\Big]+\sum_{r=1}^{N}\sum_{j=1,\,j\neq r}^{N}X_rX_j,\tag{A10}$$
where $X'=X-\mu_X$, and the Q-function is defined as $Q(x)=\int_{x}^{+\infty}\frac{1}{\sqrt{2\pi}}\exp(-t^2/2)\,\mathrm{d}t$.

Substituting Eq. (A9) and Eq. (A10) into Eq. (A5), we can get

$$E[(\Delta_i)_1]+E[(\Delta_i)_2]=4\|X\|_2^2+2\sum_{r=1}^{N}\sum_{j=1,\,j\neq r}^{N}X_rX_j\Big[1-2Q\Big(\tfrac{X_j'}{\sqrt{\sigma_\Lambda^2+\sigma_X^2}}\Big)-2Q\Big(\tfrac{X_r'}{\sqrt{\sigma_\Lambda^2+\sigma_X^2}}\Big)+4Q\Big(\tfrac{X_j'}{\sqrt{\sigma_\Lambda^2+\sigma_X^2}}\Big)Q\Big(\tfrac{X_r'}{\sqrt{\sigma_\Lambda^2+\sigma_X^2}}\Big)\Big]+2\sum_{r=1}^{N}\sum_{j=1,\,j\neq r}^{N}X_rX_j.\tag{A11}$$
Let $\tau_j=Q\big(X_j'/\sqrt{\sigma_\Lambda^2+\sigma_X^2}\big)$; then $\tau_j\in(0,1)$. The term inside the brackets in Eq. (A11) can be factored as $1-2\tau_r-2\tau_j+4\tau_r\tau_j=(1-2\tau_r)(1-2\tau_j)$. Since each factor lies in $(-1,1)$, it is easy to prove that
$$1-2\tau_r-2\tau_j+4\tau_r\tau_j>-1.\tag{A12}$$
As stated in Section 2, the elements in $X$ are non-negative, so $E[(\Delta_i)_1]+E[(\Delta_i)_2]>4\|X\|_2^2$, and therefore $E\{\|\Phi X\|_2^2\}>\frac{L}{2N}\|X\|_2^2$. Substituting this inequality into Eq. (A1), we get
$$\bar{\mu}_{\Upsilon}=\max_{\psi_j\in\Upsilon}E\{|\langle\phi_i,\psi_j\rangle|^2\}>\frac{\|X\|_2^2}{2NK^2\theta_{\max}^2}.\tag{A13}$$
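The inequality $E\{\|\Phi X\|_2^2\}>\frac{L}{2N}\|X\|_2^2$ used above can be checked by simulation. The sketch below builds the two-tone rows of Eq. (A2) from random Gaussian thresholds, with $S=X$ (a noiseless reference, a simplifying assumption) and illustrative parameter values:

```python
import numpy as np

rng = np.random.default_rng(0)
N, L, trials = 64, 16, 2000
X = rng.random(N)             # non-negative signal, as assumed in Section 2
mu, sigma = 0.5, 1.0          # threshold mean and std (illustrative values)

acc = 0.0
for _ in range(trials):
    Lam = rng.normal(mu, sigma, (L, N))   # random thresholds Lambda_{i,j}
    # "bright" rows for i <= L/2 and "dark" rows for i > L/2, per Eq. (A2)
    bright = (np.sign(X - Lam[: L // 2]) + 1) / (2 * np.sqrt(N))
    dark = (1 - np.sign(X - Lam[L // 2:])) / (2 * np.sqrt(N))
    Phi = np.vstack([bright, dark])
    acc += np.sum((Phi @ X) ** 2)

empirical = acc / trials                  # Monte Carlo estimate of E{||Phi X||^2}
bound = L / (2 * N) * np.sum(X**2)        # theoretical lower bound L/(2N)*||X||^2
print(empirical > bound)                  # True
```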
The proof of the second approximate equality in Eq. (5) is as follows. Let $\hat{\phi}$ and $\hat{\psi}$ be the vectors that maximize the mathematical expectation, and let $\hat{\Lambda}$ be the corresponding threshold. Two cases need to be discussed.

First case: if $\phi_{ij}=[1+\operatorname{sgn}(S_j-\Lambda_{ij})]/(2\sqrt{N})$, then we have

$$\bar{\mu}_{\bar{\Upsilon}}=\frac{1}{4N}E\{|\langle\operatorname{sgn}(X-\hat{\Lambda})+\mathbf{1},\hat{\psi}\rangle|^2\}=\frac{1}{4N}E\Big\{\sum_{p=1}^{N}[\operatorname{sgn}(X_p-\hat{\Lambda}_p)+1]\hat{\psi}_p\Big\}^2=\frac{1}{4N}\sum_{m=1}^{N}\sum_{n=1}^{N}\hat{\psi}_m\hat{\psi}_n\,E\{[\operatorname{sgn}(X_m-\hat{\Lambda}_m)+1][\operatorname{sgn}(X_n-\hat{\Lambda}_n)+1]\}=\frac{1}{4N}\sum_{m=1}^{N}\sum_{n=1}^{N}\hat{\psi}_m\hat{\psi}_n\Big[4-4Q\Big(\tfrac{X_m'}{\sqrt{\sigma_\Lambda^2+\sigma_X^2}}\Big)-4Q\Big(\tfrac{X_n'}{\sqrt{\sigma_\Lambda^2+\sigma_X^2}}\Big)+4Q\Big(\tfrac{X_m'}{\sqrt{\sigma_\Lambda^2+\sigma_X^2}}\Big)Q\Big(\tfrac{X_n'}{\sqrt{\sigma_\Lambda^2+\sigma_X^2}}\Big)\Big].\tag{A14}$$
When the argument of the Q-function is much smaller than 1, we have
$$Q(x)\approx\frac{1}{2}-\frac{1}{\sqrt{2\pi}}\,x.\tag{A15}$$
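This first-order expansion can be checked numerically against the exact Q-function (computed here with SciPy's standard-normal survival function):

```python
import numpy as np
from scipy.stats import norm

def q_approx(x):
    """First-order expansion of the Q-function around x = 0, per Eq. (A15)."""
    return 0.5 - x / np.sqrt(2 * np.pi)

for x in [0.01, 0.05, 0.1]:
    exact = norm.sf(x)   # Q(x) = P(Z > x) for a standard normal Z
    print(x, exact, q_approx(x), abs(exact - q_approx(x)))
```

For arguments up to 0.1 the absolute error stays well below $10^{-3}$, consistent with the small-argument assumption made in the proof.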
Based on Eq. (A15) and the assumptions $\big|X_m'/\sqrt{\sigma_\Lambda^2+\sigma_X^2}\big|\ll 1$ and $\big|X_n'/\sqrt{\sigma_\Lambda^2+\sigma_X^2}\big|\ll 1$,
$$\bar{\mu}_{\bar{\Upsilon}}=\max_{\psi_j\in\bar{\Upsilon}}E\{|\langle\phi_i,\psi_j\rangle|^2\}\approx\frac{1}{4N}\Bigg[\sum_{m=1}^{N}\sum_{n=1}^{N}\hat{\psi}_m\hat{\psi}_n+\sum_{m=1}^{N}\sum_{n=1}^{N}\hat{\psi}_n\frac{\sqrt{2}\,X_m'\hat{\psi}_m}{\sqrt{\pi(\sigma_\Lambda^2+\sigma_X^2)}}+\sum_{m=1}^{N}\sum_{n=1}^{N}\hat{\psi}_m\frac{\sqrt{2}\,X_n'\hat{\psi}_n}{\sqrt{\pi(\sigma_\Lambda^2+\sigma_X^2)}}+\sum_{m=1}^{N}\sum_{n=1}^{N}\frac{2X_m'\hat{\psi}_mX_n'\hat{\psi}_n}{\pi(\sigma_\Lambda^2+\sigma_X^2)}\Bigg].\tag{A16}$$
Since $\psi_j\in\bar{\Upsilon}$, $\psi_j$ is orthogonal to $X'+\mu_X$, i.e., to $X$. Hence,
$$\bar{\mu}_{\bar{\Upsilon}}\approx\frac{1}{4N}\Bigg[\sum_{m=1}^{N}\sum_{n=1}^{N}\hat{\psi}_m\hat{\psi}_n-\frac{2\sqrt{2}\,\mu_X}{\sqrt{\pi(\sigma_\Lambda^2+\sigma_X^2)}}\sum_{m=1}^{N}\sum_{n=1}^{N}\hat{\psi}_m\hat{\psi}_n+\frac{2\mu_X^2}{\pi(\sigma_\Lambda^2+\sigma_X^2)}\sum_{m=1}^{N}\sum_{n=1}^{N}\hat{\psi}_m\hat{\psi}_n\Bigg]=\frac{1}{4N}\Bigg\{\Bigg[\frac{\sqrt{2}\,\mu_X}{\sqrt{\pi(\sigma_\Lambda^2+\sigma_X^2)}}-1\Bigg]\sum_{m=1}^{N}\hat{\psi}_m\Bigg\}^2.\tag{A17}$$
Second case: if $\phi_{ij}=[1-\operatorname{sgn}(S_j-\Lambda_{ij})]/(2\sqrt{N})$, the same derivation as in the first case gives
$$\bar{\mu}_{\bar{\Upsilon}}\approx\frac{1}{4N}\Bigg\{\Bigg[\frac{\sqrt{2}\,\mu_X}{\sqrt{\pi(\sigma_\Lambda^2+\sigma_X^2)}}+1\Bigg]\sum_{m=1}^{N}\hat{\psi}_m\Bigg\}^2.\tag{A18}$$
In conclusion,
$$\bar{\mu}_{\bar{\Upsilon}}\approx\frac{1}{4N}\Bigg\{\Bigg[\frac{\sqrt{2}\,\mu_X}{\sqrt{\pi(\sigma_\Lambda^2+\sigma_X^2)}}\pm 1\Bigg]\sum_{m=1}^{N}\hat{\psi}_m\Bigg\}^2.\tag{A19}$$

APPENDIX B: PROOF OF EQ. (6)

For the 2D image $I\in\mathbb{R}^{N_x\times N_y}$, we have $\mathrm{vec}(I)=\Psi\Theta$. Compared to Eq. (3), $\mathrm{vec}(I)$ can be regarded as the original signal $X$. If the 2D-IDCT basis is employed as the sparse basis, then $\Psi\in\mathbb{R}^{(N_x\cdot N_y)\times(N_x\cdot N_y)}$ is given by $\Psi=\Omega_1\otimes\Omega_2$, where $\Omega_1\in\mathbb{R}^{N_x\times N_x}$ and $\Omega_2\in\mathbb{R}^{N_y\times N_y}$ are the 1D-IDCT transformation matrices, and $\otimes$ represents the Kronecker product.

For convenience, assume that $N_x=N_y=n$ and $n^2=N$. In particular, if $N_x\neq N_y$, we can take square blocks of the image and perform the IDCT on them in sequence. Now denote the 1D-IDCT basis $\Omega=[\omega_1,\omega_2,\ldots,\omega_n]\in\mathbb{R}^{n\times n}$; thus we can formulate the 2D-IDCT basis $\Psi\in\mathbb{R}^{N\times N}$ as

$$\Psi=\Omega\otimes\Omega=\begin{bmatrix}\omega_{11}\Omega & \cdots & \omega_{1n}\Omega\\ \vdots & \ddots & \vdots\\ \omega_{n1}\Omega & \cdots & \omega_{nn}\Omega\end{bmatrix}.\tag{B1}$$
From the above equation, the sum of each column of $\Psi$ can be written as
$$\sum_{i=1}^{N}\psi_{ij}=\sum_{k=1}^{n}\sum_{l=1}^{n}\omega_{l,\lceil j/n\rceil}\,\omega_{k,\;j-(\lceil j/n\rceil-1)n},\tag{B2}$$
where ψij is the element of Ψ in the ith row and jth column.

Therefore, we should first analyze each element in the 1D-IDCT basis before discussing the sum of ψij. The element of Ω in the kth row and jth column can be expressed as

$$\omega_{kj}=\begin{cases}\dfrac{1}{\sqrt{n}} & \text{if } j=1,\\[2mm] \sqrt{\dfrac{2}{n}}\cos\dfrac{(2k-1)(j-1)\pi}{2n} & \text{if } j>1.\end{cases}\tag{B3}$$
When $j>1$,
$$\sum_{k=1}^{n}\cos\frac{(2k-1)(j-1)\pi}{2n}=0.\tag{B4}$$
The proof of the above equation is presented below. Denote $a=j-1$, which is a positive integer; then $\sum_{k=1}^{n}\cos\frac{(2k-1)(j-1)\pi}{2n}=\sum_{k=1}^{n}\cos\frac{(2k-1)a\pi}{2n}$. According to Euler's formula, this can be written as
$$\sum_{k=1}^{n}\cos\frac{(2k-1)a\pi}{2n}=\mathrm{Re}\Big\{\sum_{k=1}^{n}\exp\Big[\frac{(2k-1)a\pi i}{2n}\Big]\Big\}=\mathrm{Re}\Big[\exp\Big(-\frac{a\pi i}{2n}\Big)\sum_{k=1}^{n}\exp\Big(\frac{ka\pi i}{n}\Big)\Big],\tag{B5}$$
where $\mathrm{Re}(\cdot)$ denotes the real part of a complex number, and $\exp\big(\frac{ka\pi i}{n}\big)\ (k=1,2,\ldots,n)$ is a geometric progression with common ratio $\exp\big(\frac{a\pi i}{n}\big)$. The sum of the geometric progression is
$$\sum_{k=1}^{n}\exp\Big(\frac{ka\pi i}{n}\Big)=\exp\Big(\frac{a\pi i}{n}\Big)\frac{1-\big[\exp\big(\frac{a\pi i}{n}\big)\big]^n}{1-\exp\big(\frac{a\pi i}{n}\big)}=\exp\Big(\frac{a\pi i}{n}\Big)\frac{1-\exp(a\pi i)}{1-\exp\big(\frac{a\pi i}{n}\big)}.\tag{B6}$$
If $a$ is an even number, $\exp(a\pi i)=1$; substituting Eq. (B6) into Eq. (B5), Eq. (B5) equals zero. If $a$ is an odd number, $\exp(a\pi i)=-1$; substituting Eq. (B6) into Eq. (B5), the part in the brackets of the last term in Eq. (B5) equals
$$\exp\Big(-\frac{a\pi i}{2n}\Big)\sum_{k=1}^{n}\exp\Big(\frac{ka\pi i}{n}\Big)=\exp\Big(-\frac{a\pi i}{2n}\Big)\cdot\exp\Big(\frac{a\pi i}{n}\Big)\frac{1-\exp(a\pi i)}{1-\exp\big(\frac{a\pi i}{n}\big)}=\exp\Big(\frac{a\pi i}{2n}\Big)\frac{2}{1-\exp\big(\frac{a\pi i}{n}\big)}.\tag{B7}$$
Using Euler's formula again, Eq. (B7) can be written as $\exp\big(-\frac{a\pi i}{2n}\big)\sum_{k=1}^{n}\exp\big(\frac{ka\pi i}{n}\big)=i/\sin\frac{a\pi}{2n}$.

The real part of $\exp\big(-\frac{a\pi i}{2n}\big)\sum_{k=1}^{n}\exp\big(\frac{ka\pi i}{n}\big)$ is zero, which means $\sum_{k=1}^{n}\cos\frac{(2k-1)a\pi}{2n}=0$. So,

$$\sum_{i=1}^{n}\omega_{ij}=0,\quad\text{for } j>1.\tag{B8}$$
Substituting Eq. (B3) and Eq. (B8) into Eq. (B2), we find that
$$\sum_{i=1}^{N}\psi_{ij}=\begin{cases}\sqrt{N} & \text{if } j=1,\\ 0 & \text{if } j>1.\end{cases}\tag{B9}$$
When $\psi_j\in\bar{\Upsilon}$, it means $\theta_j=\psi_j^{T}X=0$. Since every element in $X$ and $\psi_1$ is non-negative, for $j=1$ the corresponding sparse coefficient satisfies $\theta_1=\psi_1^{T}X>0$, so $\psi_1\notin\bar{\Upsilon}$. Therefore, we can draw the conclusion that
$$\sum_{m=1}^{N}\psi_{mj}=0\quad\text{if }\psi_j\in\bar{\Upsilon}.\tag{B10}$$
Substituting Eq. (B10) into Eq. (5), then we can derive Eq. (6).
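The column-sum property of Eq. (B9) can be verified numerically by constructing the 1D-IDCT basis of Eq. (B3) and forming its Kronecker square (the `idct_basis` helper is an illustrative construction, using 0-based indexing internally):

```python
import numpy as np

def idct_basis(n):
    """1D-IDCT basis: Omega[k, j] per Eq. (B3), for k, j = 1..n (0-indexed arrays)."""
    k = np.arange(1, n + 1)[:, None]   # row index k = 1..n
    j = np.arange(1, n + 1)[None, :]   # column index j = 1..n
    Omega = np.sqrt(2.0 / n) * np.cos((2 * k - 1) * (j - 1) * np.pi / (2 * n))
    Omega[:, 0] = 1.0 / np.sqrt(n)     # first column is the constant vector
    return Omega

n = 8
Omega = idct_basis(n)
Psi = np.kron(Omega, Omega)            # 2D-IDCT basis, N = n^2 = 64
col_sums = Psi.sum(axis=0)
print(col_sums[0])                     # sqrt(N) = 8.0 for j = 1
print(np.max(np.abs(col_sums[1:])))    # ~0 for all j > 1
```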

APPENDIX C: SUMMARY OF THE COMPARISON AMONG DIFFERENT METHODS USING THE EXPERIMENTAL TESTBED

Table 2 provides a side-by-side comparison among the different methods using the experimental testbed in Section 5, including the conventional system and the proposed system with random coded apertures and with TACS coded apertures. PSNR can only be computed between images of identical size; since the spectral images acquired by the conventional system and by the proposed system have different sizes, the PSNR of the conventional system is not given. Instead, PSNRs of the reconstructed images with respect to the original high-resolution images are computed. The conventional system takes 24 snapshots, tuning the LCTF once for each of the 24 spectral channels, whereas 24×12=288 snapshots are required for the proposed system using random coded apertures. For the TACS coding method, an additional 4×32=128 snapshots are required beforehand to obtain the four reference spectral images, each modulated by 32 random coded aperture patterns. Although the number of snapshots for the TACS method is larger, a detector of 50×50 pixels can be used to acquire images of 400×400 pixels, and the reconstruction quality is improved compared to random coded apertures. Note that the time to collect the full data with the TACS method includes the time to observe and reconstruct the four reference channels, as well as the time to calculate the four sets of TACS coded apertures corresponding to the four sub-groups. The optical efficiency of the conventional system is mainly determined by the transmittance of the LCTF; for the compressive imaging systems, it is also affected by the transmittance of the DMD. The ratio of blocking to unblocking micromirrors is almost 1:1 for both TACS and random coded aperture patterns, so the transmittance of the DMD is about 50%, and the optical efficiency of the proposed compressive system is almost the same for TACS and random coded apertures. It is, in turn, about 50% of that of the conventional system, although in practice the difference is less pronounced because the LCTF itself has a relatively low transmittance, an inherent limitation of narrowband filter devices.


Table 2. Comparison Among Different Methods Using the Experimental Testbed, Including the Conventional System, and Our System Using Random Coded Apertures and TACS Coded Apertures

APPENDIX D: SUMMARY OF THE COMPARISON AMONG THE THREE CODING METHODS

Table 3 summarizes the comparison among the three coding methods: the single-channel random coding method, the single-channel TACS coding method, and the multi-channel TACS coding method. In Section 6, the original data cube consists of 24 spectral images with a spatial resolution of 256×256 pixels, and 12 snapshots are used to obtain the measurements with each coding method. For the single-channel methods with random coded apertures and TACS coded apertures, the numbers of snapshots have been given in Appendix C. As for the multi-channel TACS method, three spectral channels are captured during each integration time interval, so 24/3×12=96 snapshots are required; adding the 4×32=128 snapshots needed to obtain the four reference spectral images, the total becomes 224. As shown in Table 3, the multi-channel TACS method reduces both the number of snapshots and the compression ratio, since it introduces spectral compression. Although its reconstruction performance is inferior to that of the single-channel method with TACS coded apertures, it is still superior to the single-channel random coding method, even though the latter performs no spectral compression.
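The snapshot budgets quoted in Appendices C and D can be tallied directly (a simple numeric check of the counts stated in the text; variable names are illustrative):

```python
# Snapshot budgets for the 24-channel data cube, per Appendices C and D
channels, Q = 24, 3
random_total = channels * 12                          # 24 x 12 = 288 snapshots
reference_extra = 4 * 32                              # 4 reference channels x 32 random patterns = 128
multi_total = channels // Q * 12 + reference_extra    # 96 + 128 = 224

print(random_total, reference_extra, multi_total)     # 288 128 224
```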


Table 3. Comparison Among the Three Coding Methods, Including the Single-Channel Random Coding Method, Single-Channel TACS Coding Method, and Multi-Channel TACS Coding Method

Funding

National Natural Science Foundation of China (61371132, 61471043, 61527802); International S&T Cooperation Program of China (2014DFR10960).

Disclosures

The authors declare no conflicts of interest.

REFERENCES

1. D. Bannon, “Hyperspectral imaging: cubes and slices,” Nat. Photonics 3, 627–629 (2009). [CrossRef]  

2. S. R. Domingue, D. G. Winters, and R. A. Bartels, “Hyperspectral imaging via labeled excitation light and background-free absorption spectroscopy,” Optica 2, 929–932 (2015). [CrossRef]  

3. D. Haboudane, J. R. Miller, E. Pattey, P. J. Zarco-Tejada, and I. B. Strachan, “Hyperspectral vegetation indices and novel algorithms for predicting green LAI of crop canopies: modeling and validation in the context of precision agriculture,” Remote Sens. Environ. 90, 337–352 (2004). [CrossRef]  

4. A. Gowen, C. O’Donnell, P. Cullen, G. Downey, and J. Frias, “Hyperspectral imaging—an emerging process analytical tool for food quality and safety control,” Trends Food Sci. Technol. 18, 590–598 (2007). [CrossRef]  

5. G. Lu and B. Fei, “Medical hyperspectral imaging: a review,” J. Biomed. Opt. 19, 010901 (2014). [CrossRef]  

6. F. A. Kruse, J. W. Boardman, and J. F. Huntington, “Comparison of airborne hyperspectral data and EO-1 hyperion for mineral mapping,” IEEE Trans. Geosci. Remote Sens. 41, 1388–1400 (2003). [CrossRef]  

7. R. O. Green, M. L. Eastwood, C. M. Sarture, T. G. Chrien, M. Aronsson, B. J. Chippendale, J. A. Faust, B. E. Pavri, C. J. Chovit, M. Solis, M. R. Olah, and O. Williams, “Imaging spectroscopy and the airborne visible/infrared imaging spectrometer (AVIRIS),” Remote Sens. Environ. 65, 227–248 (1998). [CrossRef]  

8. R. Basedow, D. Carmer, and M. Anderson, “HYDICE system, implementation and performance,” Proc. SPIE 2480, 258–267 (1995). [CrossRef]  

9. B. K. Ford, M. R. Descour, and R. M. Lynch, “Large-image-format computed tomography imaging spectrometer for fluorescence microscopy,” Opt. Express 9, 444–453 (2001). [CrossRef]  

10. F. A. Best, H. E. Revercomb, D. C. Tobin, R. O. Knuteson, J. K. Taylor, D. J. Thielman, D. P. Adler, M. W. Werner, S. D. Ellington, J. D. Elwell, D. K. Scott, G. W. Cantwell, G. E. Bingham, and W. L. Smith, “Performance verification of the geosynchronous imaging fourier transform spectrometer (GIFTS) on-board blackbody calibration system,” Proc. SPIE 6405, 64050I (2006). [CrossRef]  

11. X. Lin, G. Wetzstein, Y. Liu, and Q. Dai, “Dual-coded compressive hyperspectral imaging,” Opt. Lett. 39, 2044–2047 (2014).

12. J. J. Puschell, “Hyperspectral imagers for current and future missions,” Proc. SPIE 4041, 121–132 (2000).

13. Q. Li, X. He, Y. Wang, H. Liu, D. Xu, and F. Guo, “Review of spectral imaging technology in biomedical engineering: achievements and challenges,” J. Biomed. Opt. 18, 100901 (2013).

14. S. C. Gebhart, R. C. Thompson, and A. Mahadevan-Jansen, “Liquid-crystal tunable filter spectral imaging for brain tumor demarcation,” Appl. Opt. 46, 1896–1910 (2007).

15. S. J. Woltman, G. D. Jay, and G. P. Crawford, “Liquid-crystal materials find a new order in biomedical applications,” Nat. Mater. 6, 929–938 (2007).

16. M. D. Evans, C. Thai, and J. C. Grant, “Development of a spectral imaging system based on a liquid crystal tunable filter,” Trans. ASAE 41, 1845–1852 (1998).

17. O. Aharon and I. Abdulhalim, “Liquid crystal Lyot tunable filter with extended free spectral range,” Opt. Express 17, 11426–11433 (2009).

18. Y. August, C. Vachman, Y. Rivenson, and A. Stern, “Compressive hyperspectral imaging by random separable projections in both the spatial and the spectral domains,” Appl. Opt. 52, D46–D54 (2013).

19. R. M. Willett, M. F. Duarte, M. A. Davenport, and R. G. Baraniuk, “Sparsity and structure in hyperspectral imaging: sensing, reconstruction, and target detection,” IEEE Signal Process. Mag. 31, 116–126 (2014).

20. D. L. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory 52, 1289–1306 (2006).

21. E. J. Candès, J. Romberg, and T. Tao, “Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information,” IEEE Trans. Inf. Theory 52, 489–509 (2006).

22. L. Zhang, D. Liang, B. Li, Y. Kang, Z. Pan, D. Zhang, and X. Ma, “Study on the key technology of spectral reflectivity reconstruction based on sparse prior by a single-pixel detector,” Photon. Res. 4, 115–121 (2016).

23. H. Arguello, H. F. Rueda, and G. R. Arce, “Spatial super-resolution in code aperture spectral imaging,” Proc. SPIE 8365, 83650A (2012).

24. X. Wang, Y. Zhang, X. Ma, T. Xu, and G. R. Arce, “Compressive spectral imaging system based on liquid crystal tunable filter,” Opt. Express 26, 25226–25243 (2018).

25. G. R. Arce, D. J. Brady, L. Carin, H. Arguello, and D. S. Kittle, “Compressive coded aperture spectral imaging: an introduction,” IEEE Signal Process. Mag. 31, 105–115 (2014).

26. A. Rajwade, D. Kittle, T. Tsai, D. J. Brady, and L. Carin, “Coded hyperspectral imaging and blind compressive sensing,” SIAM J. Imaging Sci. 6, 782–812 (2013).

27. H. Arguello and G. R. Arce, “Code aperture optimization for spectrally agile compressive imaging,” J. Opt. Soc. Am. A 28, 2400–2413 (2011).

28. L. Wang, T. Zhang, Y. Fu, and H. Huang, “HyperReconNet: joint coded aperture optimization and image reconstruction for compressive hyperspectral imaging,” IEEE Trans. Image Process. 28, 2257–2270 (2019).

29. H. Arguello and G. R. Arce, “Restricted isometry property in coded aperture compressive spectral imaging,” in IEEE Statistical Signal Processing Workshop (SSP) (IEEE, 2012), pp. 716–719.

30. H. Arguello and G. R. Arce, “Colored coded aperture design by concentration of measure in compressive spectral imaging,” IEEE Trans. Image Process. 23, 1896–1908 (2014).

31. Z. Wang and G. R. Arce, “Variable density compressed image sampling,” IEEE Trans. Image Process. 19, 264–270 (2010).

32. J. Hahn, C. Debes, M. Leigsnering, and A. M. Zoubir, “Compressive sensing and adaptive direct sampling in hyperspectral imaging,” Digit. Signal Process. 26, 113–126 (2014).

33. L. Wang, Z. Xiong, G. Shi, F. Wu, and W. Zeng, “Adaptive nonlocal sparse representation for dual-camera compressive hyperspectral imaging,” IEEE Trans. Pattern Anal. Mach. Intell. 39, 2104–2111 (2017).

34. J. Zhang, D. Zhao, F. Jiang, and W. Gao, “Structural group sparse representation for image compressive sensing recovery,” in Data Compression Conference (IEEE, 2013), pp. 331–340.

35. Z. Zha, X. Liu, X. Huang, H. Shi, Y. Xu, Q. Wang, L. Tang, and X. Zhang, “Analyzing the group sparsity based on the rank minimization methods,” in IEEE International Conference on Multimedia and Expo (ICME) (IEEE, 2017), pp. 883–888.

36. X. Yuan, T. Tsai, R. Zhu, P. Llull, D. Brady, and L. Carin, “Compressive hyperspectral imaging with side information,” IEEE J. Sel. Top. Signal Process. 9, 964–976 (2015).

37. X. Wang and J. Liang, “Side information-aided compressed sensing reconstruction via approximate message passing,” in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (IEEE, 2014), pp. 3330–3334.

38. J. F. C. Mota, N. Deligiannis, and M. R. D. Rodrigues, “Compressed sensing with side information: geometrical interpretation and performance bounds,” in IEEE Global Conference on Signal and Information Processing (GlobalSIP) (IEEE, 2014), pp. 512–516.

39. J. Yang, X. Liao, X. Yuan, P. Llull, D. J. Brady, G. Sapiro, and L. Carin, “Compressive sensing by learning a Gaussian mixture model from measurements,” IEEE Trans. Image Process. 24, 106–119 (2015).

40. L. Galvis, D. Lau, X. Ma, H. Arguello, and G. R. Arce, “Coded aperture design in compressive spectral imaging based on side information,” Appl. Opt. 56, 6332–6340 (2017).

41. X. Ma, H. Zhang, X. Ma, G. R. Arce, T. Xu, and T. Mao, “Snapshot compressive spectral imaging based on adaptive coded apertures,” Proc. SPIE 10658, 1065803 (2018).

42. M. Yang, F. Hoog, Y. Fan, and W. Hu, “Adaptive sampling by dictionary learning for hyperspectral imaging,” IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 9, 4501–4509 (2016).

43. M. A. López-Alvarez, J. Hernández-Andrés, and J. Romero, “Developing an optimum computer-designed multispectral system comprising a monochrome CCD camera and a liquid-crystal tunable filter,” Appl. Opt. 47, 4381–4390 (2008).

44. M. P. Edgar, G. M. Gibson, and M. J. Padgett, “Principles and prospects for single-pixel imaging,” Nat. Photonics 13, 13–20 (2018).

45. N. Radwell, K. J. Mitchell, G. M. Gibson, M. P. Edgar, R. Bowman, and M. J. Padgett, “Single-pixel infrared and visible microscope,” Optica 1, 285–289 (2014).

46. H. Partanen, A. Friberg, T. Setälä, and J. Turunen, “Spectral measurement of coherence Stokes parameters of random broadband light beams,” Photon. Res. 7, 669–677 (2019).

47. Y. Fu, Y. Zheng, I. Sato, and Y. Sato, “Exploiting spectral-spatial correlation for coded hyperspectral image restoration,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2016), pp. 3727–3736.

48. R. A. Ulichney, “Dithering with blue noise,” Proc. IEEE 76, 56–79 (1988).

49. E. J. Candès and T. Tao, “Near-optimal signal recovery from random projections: universal encoding strategies,” IEEE Trans. Inf. Theory 52, 5406–5425 (2006).

50. M. A. T. Figueiredo, R. D. Nowak, and S. J. Wright, “Gradient projection for sparse reconstruction: application to compressed sensing and other inverse problems,” IEEE J. Sel. Top. Signal Process. 1, 586–597 (2007).

51. J. M. Bioucas-Dias and M. A. T. Figueiredo, “A new TwIST: two-step iterative shrinkage/thresholding algorithms for image restoration,” IEEE Trans. Image Process. 16, 2992–3004 (2007).

52. S. J. Wright, R. D. Nowak, and M. A. T. Figueiredo, “Sparse reconstruction by separable approximation,” IEEE Trans. Signal Process. 57, 2479–2493 (2009).

53. E. Candès and J. Romberg, “Sparsity and incoherence in compressive sampling,” Inverse Probl. 23, 969–985 (2007).

54. E. J. Candès, J. K. Romberg, and T. Tao, “Stable signal recovery from incomplete and inaccurate measurements,” Commun. Pure Appl. Math. 59, 1207–1223 (2006).

55. X. Ma, D. Shi, Z. Wang, Y. Li, and G. R. Arce, “Lithographic source optimization based on adaptive projection compressive sensing,” Opt. Express 25, 7131–7149 (2017).

56. H. Zhang, W. He, L. Zhang, H. Shen, and Q. Yuan, “Hyperspectral image restoration using low-rank matrix recovery,” IEEE Trans. Geosci. Remote Sens. 52, 4729–4743 (2014).

[Crossref]

Candès, E.

E. Candès and J. Romberg, “Sparsity and incoherence in compressive sampling,” Inverse Probl. 23, 969–985 (2007).
[Crossref]

Candès, E. J.

E. J. Candès, J. K. Romberg, and T. Tao, “Stable signal recovery from incomplete and inaccurate measurements,” Commun. Pure Appl. Math. 59, 1207–1223 (2006).

Cantwell, G. W.

F. A. Best, H. E. Revercomb, D. C. Tobin, R. O. Knuteson, J. K. Taylor, D. J. Thielman, D. P. Adler, M. W. Werner, S. D. Ellington, J. D. Elwell, D. K. Scott, G. W. Cantwell, G. E. Bingham, and W. L. Smith, “Performance verification of the geosynchronous imaging fourier transform spectrometer (GIFTS) on-board blackbody calibration system,” Proc. SPIE 6405, 64050I (2006).
[Crossref]

Carin, L.

X. Yuan, T. Tsai, R. Zhu, P. Llull, D. Brady, and L. Carin, “Compressive hyperspectral imaging with side information,” IEEE J. Sel. Top. Signal Process. 9, 964–976 (2015).
[Crossref]

J. Yang, X. Liao, X. Yuan, P. Llull, D. J. Brady, G. Sapiro, and L. Carin, “Compressive sensing by learning a Gaussian mixture model from measurements,” IEEE Trans. Image Process. 24, 106–119 (2015).
[Crossref]

G. R. Arce, D. J. Brady, L. Carin, H. Arguello, and D. S. Kittle, “Compressive coded aperture spectral imaging: an introduction,” IEEE Signal Process. Mag. 31, 105–115 (2014).
[Crossref]

A. Rajwade, D. Kittle, T. Tsai, D. J. Brady, and L. Carin, “Coded hyperspectral imaging and blind compressive sensing,” SIAM J. Imaging Sci. 6, 782–812 (2013).
[Crossref]

Carmer, D.

R. Basedow, D. Carmer, and M. Anderson, “HYDICE system, implementation and performance,” Proc. SPIE 2480, 258–267 (1995).
[Crossref]

Chippendale, B. J.

R. O. Green, M. L. Eastwood, C. M. Sarture, T. G. Chrien, M. Aronsson, B. J. Chippendale, J. A. Faust, B. E. Pavri, C. J. Chovit, M. Solis, M. R. Olah, and O. Williams, “Imaging spectroscopy and the airborne visible/infrared imaging spectrometer (AVIRIS),” Remote Sens. Environ. 65, 227–248 (1998).
[Crossref]

Chovit, C. J.

R. O. Green, M. L. Eastwood, C. M. Sarture, T. G. Chrien, M. Aronsson, B. J. Chippendale, J. A. Faust, B. E. Pavri, C. J. Chovit, M. Solis, M. R. Olah, and O. Williams, “Imaging spectroscopy and the airborne visible/infrared imaging spectrometer (AVIRIS),” Remote Sens. Environ. 65, 227–248 (1998).
[Crossref]

Chrien, T. G.

R. O. Green, M. L. Eastwood, C. M. Sarture, T. G. Chrien, M. Aronsson, B. J. Chippendale, J. A. Faust, B. E. Pavri, C. J. Chovit, M. Solis, M. R. Olah, and O. Williams, “Imaging spectroscopy and the airborne visible/infrared imaging spectrometer (AVIRIS),” Remote Sens. Environ. 65, 227–248 (1998).
[Crossref]

Crawford, G. P.

S. J. Woltman, G. D. Jay, and G. P. Crawford, “Liquid-crystal materials find a new order in biomedical applications,” Nat. Mater. 6, 929–938 (2007).
[Crossref]

Cullen, P.

A. Gowen, C. O’Donnell, P. Cullen, G. Downey, and J. Frias, “Hyperspectral imaging—an emerging process analytical tool for food quality and safety control,” Trends Food Sci. Technol. 18, 590–598 (2007).
[Crossref]

Dai, Q.

Davenport, M. A.

R. M. Willett, M. F. Duarte, M. A. Davenport, and R. G. Baraniuk, “Sparsity and structure in hyperspectral imaging: sensing, reconstruction, and target detection,” IEEE Signal Process. Mag. 31, 116–126 (2014).
[Crossref]

Debes, C.

J. Hahn, C. Debes, M. Leigsnering, and A. M. Zoubir, “Compressive sensing and adaptive direct sampling in hyperspectral imaging,” Digit. Signal Process. 26, 113–126 (2014).
[Crossref]

Deligiannis, N.

J. F. C. Mota, N. Deligiannis, and M. R. D. Rodrigues, “Compressed sensing with side information: geometrical interpretation and performance bounds,” in IEEE Global Conference on Signal and Information Processing (GlobalSIP) (IEEE, 2014), pp. 512–516.

Descour, M. R.

Domingue, S. R.

Donoho, D. L.

D. L. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory 52, 1289–1306 (2006).
[Crossref]

Downey, G.

A. Gowen, C. O’Donnell, P. Cullen, G. Downey, and J. Frias, “Hyperspectral imaging—an emerging process analytical tool for food quality and safety control,” Trends Food Sci. Technol. 18, 590–598 (2007).
[Crossref]

Duarte, M. F.

R. M. Willett, M. F. Duarte, M. A. Davenport, and R. G. Baraniuk, “Sparsity and structure in hyperspectral imaging: sensing, reconstruction, and target detection,” IEEE Signal Process. Mag. 31, 116–126 (2014).
[Crossref]

Eastwood, M. L.

R. O. Green, M. L. Eastwood, C. M. Sarture, T. G. Chrien, M. Aronsson, B. J. Chippendale, J. A. Faust, B. E. Pavri, C. J. Chovit, M. Solis, M. R. Olah, and O. Williams, “Imaging spectroscopy and the airborne visible/infrared imaging spectrometer (AVIRIS),” Remote Sens. Environ. 65, 227–248 (1998).
[Crossref]

Edgar, M. P.

M. P. Edgar, G. M. Gibson, and M. J. Padgett, “Principles and prospects for single-pixel imaging,” Nat. Photonics 13, 13–20 (2018).
[Crossref]

N. Radwell, K. J. Mitchell, G. M. Gibson, M. P. Edgar, R. Bowman, and M. J. Padgett, “Single-pixel infrared and visible microscope,” Optica 1, 285–289 (2014).
[Crossref]

Ellington, S. D.

F. A. Best, H. E. Revercomb, D. C. Tobin, R. O. Knuteson, J. K. Taylor, D. J. Thielman, D. P. Adler, M. W. Werner, S. D. Ellington, J. D. Elwell, D. K. Scott, G. W. Cantwell, G. E. Bingham, and W. L. Smith, “Performance verification of the geosynchronous imaging fourier transform spectrometer (GIFTS) on-board blackbody calibration system,” Proc. SPIE 6405, 64050I (2006).
[Crossref]

Elwell, J. D.

F. A. Best, H. E. Revercomb, D. C. Tobin, R. O. Knuteson, J. K. Taylor, D. J. Thielman, D. P. Adler, M. W. Werner, S. D. Ellington, J. D. Elwell, D. K. Scott, G. W. Cantwell, G. E. Bingham, and W. L. Smith, “Performance verification of the geosynchronous imaging fourier transform spectrometer (GIFTS) on-board blackbody calibration system,” Proc. SPIE 6405, 64050I (2006).
[Crossref]

Evans, M. D.

M. D. Evans, C. Thai, and J. C. Grant, “Development of a spectral imaging system based on a liquid crystal tunable filter,” Trans. ASAE 41, 1845–1852 (1998).
[Crossref]

Fan, Y.

M. Yang, F. Hoog, Y. Fan, and W. Hu, “Adaptive sampling by dictionary learning for hyperspectral imaging,” IEEE J. Sel. Top. Appl. Earth Observ. Remote Sensing 9, 4501–4509 (2016).
[Crossref]

Faust, J. A.

R. O. Green, M. L. Eastwood, C. M. Sarture, T. G. Chrien, M. Aronsson, B. J. Chippendale, J. A. Faust, B. E. Pavri, C. J. Chovit, M. Solis, M. R. Olah, and O. Williams, “Imaging spectroscopy and the airborne visible/infrared imaging spectrometer (AVIRIS),” Remote Sens. Environ. 65, 227–248 (1998).
[Crossref]

Fei, B.

G. Lu and B. Fei, “Medical hyperspectral imaging: a review,” J. Biomed. Opt. 19, 010901 (2014).
[Crossref]

Figueiredo, M. A. T.

S. J. Wright, R. D. Nowak, and M. A. T. Figueiredo, “Sparse reconstruction by separable approximation,” IEEE Trans. Signal Process. 57, 2479–2493 (2009).
[Crossref]

J. M. Bioucas-Dias and M. A. T. Figueiredo, “A new TwIST: two-step iterative shrinkage/thresholding algorithms for image restoration,” IEEE Trans. Image Process. 16, 2992–3004 (2007).
[Crossref]

M. A. T. Figueiredo, R. D. Nowak, and S. J. Wright, “Gradient projection for sparse reconstruction: application to compressed sensing and other inverse problems,” IEEE J. Sel. Top. Signal Process. 1, 586–597 (2007).
[Crossref]

Ford, B. K.

Frias, J.

A. Gowen, C. O’Donnell, P. Cullen, G. Downey, and J. Frias, “Hyperspectral imaging—an emerging process analytical tool for food quality and safety control,” Trends Food Sci. Technol. 18, 590–598 (2007).
[Crossref]

Friberg, A.

Fu, Y.

L. Wang, T. Zhang, Y. Fu, and H. Huang, “HyperReconNet: joint coded aperture optimization and image reconstruction for compressive hyperspectral imaging,” IEEE Trans. Image Process. 28, 2257–2270 (2019).
[Crossref]

Y. Fu, Y. Zheng, I. Sato, and Y. Sato, “Exploiting spectral-spatial correlation for coded hyperspectral image restoration,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2016), pp. 3727–3736.

Galvis, L.

Gao, W.

J. Zhang, D. Zhao, F. Jiang, and W. Gao, “Structural group sparse representation for image compressive sensing recovery,” in Data Compression Conference (IEEE, 2013), pp. 331–340.

Gebhart, S. C.

Gibson, G. M.

M. P. Edgar, G. M. Gibson, and M. J. Padgett, “Principles and prospects for single-pixel imaging,” Nat. Photonics 13, 13–20 (2018).
[Crossref]

N. Radwell, K. J. Mitchell, G. M. Gibson, M. P. Edgar, R. Bowman, and M. J. Padgett, “Single-pixel infrared and visible microscope,” Optica 1, 285–289 (2014).
[Crossref]

Gowen, A.

A. Gowen, C. O’Donnell, P. Cullen, G. Downey, and J. Frias, “Hyperspectral imaging—an emerging process analytical tool for food quality and safety control,” Trends Food Sci. Technol. 18, 590–598 (2007).
[Crossref]

Grant, J. C.

M. D. Evans, C. Thai, and J. C. Grant, “Development of a spectral imaging system based on a liquid crystal tunable filter,” Trans. ASAE 41, 1845–1852 (1998).
[Crossref]

Green, R. O.

R. O. Green, M. L. Eastwood, C. M. Sarture, T. G. Chrien, M. Aronsson, B. J. Chippendale, J. A. Faust, B. E. Pavri, C. J. Chovit, M. Solis, M. R. Olah, and O. Williams, “Imaging spectroscopy and the airborne visible/infrared imaging spectrometer (AVIRIS),” Remote Sens. Environ. 65, 227–248 (1998).
[Crossref]

Guo, F.

Q. Li, X. He, Y. Wang, H. Liu, D. Xu, and F. Guo, “Review of spectral imaging technology in biomedical engineering: achievements and challenges,” J. Biomed. Opt. 18, 100901 (2013).
[Crossref]

Haboudane, D.

D. Haboudane, J. R. Miller, E. Pattey, P. J. Zarco-Tejada, and I. B. Strachan, “Hyperspectral vegetation indices and novel algorithms for predicting green LAI of crop canopies: modeling and validation in the context of precision agriculture,” Remote Sens. Environ. 90, 337–352 (2004).
[Crossref]

Hahn, J.

J. Hahn, C. Debes, M. Leigsnering, and A. M. Zoubir, “Compressive sensing and adaptive direct sampling in hyperspectral imaging,” Digit. Signal Process. 26, 113–126 (2014).
[Crossref]

He, W.

H. Zhang, W. He, L. Zhang, H. Shen, and Q. Yuan, “Hyperspectral image restoration using low-rank matrix recovery,” IEEE Trans. Geosci. Remote Sens. 52, 4729–4743 (2014).
[Crossref]

He, X.

Q. Li, X. He, Y. Wang, H. Liu, D. Xu, and F. Guo, “Review of spectral imaging technology in biomedical engineering: achievements and challenges,” J. Biomed. Opt. 18, 100901 (2013).
[Crossref]

Hernández-Andrés, J.

Hoog, F.

M. Yang, F. Hoog, Y. Fan, and W. Hu, “Adaptive sampling by dictionary learning for hyperspectral imaging,” IEEE J. Sel. Top. Appl. Earth Observ. Remote Sensing 9, 4501–4509 (2016).
[Crossref]

Hu, W.

M. Yang, F. Hoog, Y. Fan, and W. Hu, “Adaptive sampling by dictionary learning for hyperspectral imaging,” IEEE J. Sel. Top. Appl. Earth Observ. Remote Sensing 9, 4501–4509 (2016).
[Crossref]

Huang, H.

L. Wang, T. Zhang, Y. Fu, and H. Huang, “HyperReconNet: joint coded aperture optimization and image reconstruction for compressive hyperspectral imaging,” IEEE Trans. Image Process. 28, 2257–2270 (2019).
[Crossref]

Huang, X.

Z. Zha, X. Liu, X. Huang, H. Shi, Y. Xu, Q. Wang, L. Tang, and X. Zhang, “Analyzing the group sparsity based on the rank minimization methods,” in IEEE International Conference on Multimedia and Expo (ICME) (IEEE, 2017), pp. 883–888.

Huntington, J. F.

F. A. Kruse, J. W. Boardman, and J. F. Huntington, “Comparison of airborne hyperspectral data and EO-1 hyperion for mineral mapping,” IEEE Trans. Geosci. Remote Sens. 41, 1388–1400 (2003).
[Crossref]

Jay, G. D.

S. J. Woltman, G. D. Jay, and G. P. Crawford, “Liquid-crystal materials find a new order in biomedical applications,” Nat. Mater. 6, 929–938 (2007).
[Crossref]

Jiang, F.

J. Zhang, D. Zhao, F. Jiang, and W. Gao, “Structural group sparse representation for image compressive sensing recovery,” in Data Compression Conference (IEEE, 2013), pp. 331–340.

Kang, Y.

Kittle, D.

A. Rajwade, D. Kittle, T. Tsai, D. J. Brady, and L. Carin, “Coded hyperspectral imaging and blind compressive sensing,” SIAM J. Imaging Sci. 6, 782–812 (2013).
[Crossref]

Kittle, D. S.

G. R. Arce, D. J. Brady, L. Carin, H. Arguello, and D. S. Kittle, “Compressive coded aperture spectral imaging: an introduction,” IEEE Signal Process. Mag. 31, 105–115 (2014).
[Crossref]

Knuteson, R. O.

F. A. Best, H. E. Revercomb, D. C. Tobin, R. O. Knuteson, J. K. Taylor, D. J. Thielman, D. P. Adler, M. W. Werner, S. D. Ellington, J. D. Elwell, D. K. Scott, G. W. Cantwell, G. E. Bingham, and W. L. Smith, “Performance verification of the geosynchronous imaging fourier transform spectrometer (GIFTS) on-board blackbody calibration system,” Proc. SPIE 6405, 64050I (2006).
[Crossref]

Kruse, F. A.

F. A. Kruse, J. W. Boardman, and J. F. Huntington, “Comparison of airborne hyperspectral data and EO-1 hyperion for mineral mapping,” IEEE Trans. Geosci. Remote Sens. 41, 1388–1400 (2003).
[Crossref]

Lau, D.

Leigsnering, M.

J. Hahn, C. Debes, M. Leigsnering, and A. M. Zoubir, “Compressive sensing and adaptive direct sampling in hyperspectral imaging,” Digit. Signal Process. 26, 113–126 (2014).
[Crossref]

Li, B.

Li, Q.

Q. Li, X. He, Y. Wang, H. Liu, D. Xu, and F. Guo, “Review of spectral imaging technology in biomedical engineering: achievements and challenges,” J. Biomed. Opt. 18, 100901 (2013).
[Crossref]

Li, Y.

Liang, D.

Liang, J.

X. Wang and J. Liang, “Side information-aided compressed sensing reconstruction via approximate message passing,” in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (IEEE, 2014), pp. 3330–3334.

Liao, X.

J. Yang, X. Liao, X. Yuan, P. Llull, D. J. Brady, G. Sapiro, and L. Carin, “Compressive sensing by learning a Gaussian mixture model from measurements,” IEEE Trans. Image Process. 24, 106–119 (2015).
[Crossref]

Lin, X.

Liu, H.

Q. Li, X. He, Y. Wang, H. Liu, D. Xu, and F. Guo, “Review of spectral imaging technology in biomedical engineering: achievements and challenges,” J. Biomed. Opt. 18, 100901 (2013).
[Crossref]

Liu, X.

Z. Zha, X. Liu, X. Huang, H. Shi, Y. Xu, Q. Wang, L. Tang, and X. Zhang, “Analyzing the group sparsity based on the rank minimization methods,” in IEEE International Conference on Multimedia and Expo (ICME) (IEEE, 2017), pp. 883–888.

Liu, Y.

Llull, P.

X. Yuan, T. Tsai, R. Zhu, P. Llull, D. Brady, and L. Carin, “Compressive hyperspectral imaging with side information,” IEEE J. Sel. Top. Signal Process. 9, 964–976 (2015).
[Crossref]

J. Yang, X. Liao, X. Yuan, P. Llull, D. J. Brady, G. Sapiro, and L. Carin, “Compressive sensing by learning a Gaussian mixture model from measurements,” IEEE Trans. Image Process. 24, 106–119 (2015).
[Crossref]

López-Alvarez, M. A.

Lu, G.

G. Lu and B. Fei, “Medical hyperspectral imaging: a review,” J. Biomed. Opt. 19, 010901 (2014).
[Crossref]

Lynch, R. M.

Ma, X.

Mahadevan-Jansen, A.

Mao, T.

X. Ma, H. Zhang, X. Ma, G. R. Arce, T. Xu, and T. Mao, “Snapshot compressive spectral imaging based on adaptive coded apertures,” Proc. SPIE 10658, 1065803 (2018).
[Crossref]

Miller, J. R.

D. Haboudane, J. R. Miller, E. Pattey, P. J. Zarco-Tejada, and I. B. Strachan, “Hyperspectral vegetation indices and novel algorithms for predicting green LAI of crop canopies: modeling and validation in the context of precision agriculture,” Remote Sens. Environ. 90, 337–352 (2004).
[Crossref]

Mitchell, K. J.

Mota, J. F. C.

J. F. C. Mota, N. Deligiannis, and M. R. D. Rodrigues, “Compressed sensing with side information: geometrical interpretation and performance bounds,” in IEEE Global Conference on Signal and Information Processing (GlobalSIP) (IEEE, 2014), pp. 512–516.

Nowak, R. D.

S. J. Wright, R. D. Nowak, and M. A. T. Figueiredo, “Sparse reconstruction by separable approximation,” IEEE Trans. Signal Process. 57, 2479–2493 (2009).
[Crossref]

M. A. T. Figueiredo, R. D. Nowak, and S. J. Wright, “Gradient projection for sparse reconstruction: application to compressed sensing and other inverse problems,” IEEE J. Sel. Top. Signal Process. 1, 586–597 (2007).
[Crossref]

O’Donnell, C.

A. Gowen, C. O’Donnell, P. Cullen, G. Downey, and J. Frias, “Hyperspectral imaging—an emerging process analytical tool for food quality and safety control,” Trends Food Sci. Technol. 18, 590–598 (2007).
[Crossref]

Olah, M. R.

R. O. Green, M. L. Eastwood, C. M. Sarture, T. G. Chrien, M. Aronsson, B. J. Chippendale, J. A. Faust, B. E. Pavri, C. J. Chovit, M. Solis, M. R. Olah, and O. Williams, “Imaging spectroscopy and the airborne visible/infrared imaging spectrometer (AVIRIS),” Remote Sens. Environ. 65, 227–248 (1998).
[Crossref]

Padgett, M. J.

M. P. Edgar, G. M. Gibson, and M. J. Padgett, “Principles and prospects for single-pixel imaging,” Nat. Photonics 13, 13–20 (2018).
[Crossref]

N. Radwell, K. J. Mitchell, G. M. Gibson, M. P. Edgar, R. Bowman, and M. J. Padgett, “Single-pixel infrared and visible microscope,” Optica 1, 285–289 (2014).
[Crossref]

Pan, Z.

Partanen, H.

Pattey, E.

D. Haboudane, J. R. Miller, E. Pattey, P. J. Zarco-Tejada, and I. B. Strachan, “Hyperspectral vegetation indices and novel algorithms for predicting green LAI of crop canopies: modeling and validation in the context of precision agriculture,” Remote Sens. Environ. 90, 337–352 (2004).
[Crossref]

Pavri, B. E.

R. O. Green, M. L. Eastwood, C. M. Sarture, T. G. Chrien, M. Aronsson, B. J. Chippendale, J. A. Faust, B. E. Pavri, C. J. Chovit, M. Solis, M. R. Olah, and O. Williams, “Imaging spectroscopy and the airborne visible/infrared imaging spectrometer (AVIRIS),” Remote Sens. Environ. 65, 227–248 (1998).
[Crossref]

Puschell, J. J.

J. J. Puschell, “Hyperspectral imagers for current and future missions,” Proc. SPIE 4041, 121–132 (2000).
[Crossref]

Radwell, N.

Rajwade, A.

A. Rajwade, D. Kittle, T. Tsai, D. J. Brady, and L. Carin, “Coded hyperspectral imaging and blind compressive sensing,” SIAM J. Imaging Sci. 6, 782–812 (2013).
[Crossref]

Revercomb, H. E.

F. A. Best, H. E. Revercomb, D. C. Tobin, R. O. Knuteson, J. K. Taylor, D. J. Thielman, D. P. Adler, M. W. Werner, S. D. Ellington, J. D. Elwell, D. K. Scott, G. W. Cantwell, G. E. Bingham, and W. L. Smith, “Performance verification of the geosynchronous imaging fourier transform spectrometer (GIFTS) on-board blackbody calibration system,” Proc. SPIE 6405, 64050I (2006).
[Crossref]

Rivenson, Y.

Rodrigues, M. R. D.

J. F. C. Mota, N. Deligiannis, and M. R. D. Rodrigues, “Compressed sensing with side information: geometrical interpretation and performance bounds,” in IEEE Global Conference on Signal and Information Processing (GlobalSIP) (IEEE, 2014), pp. 512–516.

Romberg, J.

E. Candès and J. Romberg, “Sparsity and incoherence in compressive sampling,” Inverse Probl. 23, 969–985 (2007).
[Crossref]

E. J. Candés, J. Romberg, and T. Tao, “Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information,” IEEE Trans. Inf. Theory 52, 489–509 (2006).
[Crossref]

Romberg, J. K.

E. J. Candès, J. K. Romberg, and T. Tao, “Stable signal recovery from incomplete and inaccurate measurements,” Commun. Pure Appl. Math. 59, 1207–1223 (2006).

Romero, J.

Rueda, H. F.

H. Arguello, H. F. Rueda, and G. R. Arce, “Spatial super-resolution in code aperture spectral imaging,” Proc. SPIE 8365, 83650A (2012).
[Crossref]

Sapiro, G.

J. Yang, X. Liao, X. Yuan, P. Llull, D. J. Brady, G. Sapiro, and L. Carin, “Compressive sensing by learning a Gaussian mixture model from measurements,” IEEE Trans. Image Process. 24, 106–119 (2015).
[Crossref]

Sarture, C. M.

R. O. Green, M. L. Eastwood, C. M. Sarture, T. G. Chrien, M. Aronsson, B. J. Chippendale, J. A. Faust, B. E. Pavri, C. J. Chovit, M. Solis, M. R. Olah, and O. Williams, “Imaging spectroscopy and the airborne visible/infrared imaging spectrometer (AVIRIS),” Remote Sens. Environ. 65, 227–248 (1998).
[Crossref]

Sato, I.

Y. Fu, Y. Zheng, I. Sato, and Y. Sato, “Exploiting spectral-spatial correlation for coded hyperspectral image restoration,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2016), pp. 3727–3736.

Sato, Y.

Y. Fu, Y. Zheng, I. Sato, and Y. Sato, “Exploiting spectral-spatial correlation for coded hyperspectral image restoration,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2016), pp. 3727–3736.

Scott, D. K.

F. A. Best, H. E. Revercomb, D. C. Tobin, R. O. Knuteson, J. K. Taylor, D. J. Thielman, D. P. Adler, M. W. Werner, S. D. Ellington, J. D. Elwell, D. K. Scott, G. W. Cantwell, G. E. Bingham, and W. L. Smith, “Performance verification of the geosynchronous imaging fourier transform spectrometer (GIFTS) on-board blackbody calibration system,” Proc. SPIE 6405, 64050I (2006).
[Crossref]

Setälä, T.

Shen, H.

H. Zhang, W. He, L. Zhang, H. Shen, and Q. Yuan, “Hyperspectral image restoration using low-rank matrix recovery,” IEEE Trans. Geosci. Remote Sens. 52, 4729–4743 (2014).
[Crossref]

Shi, D.

Shi, G.

L. Wang, Z. Xiong, G. Shi, F. Wu, and W. Zeng, “Adaptive nonlocal sparse representation for dual-camera compressive hyperspectral imaging,” IEEE Trans. Pattern Anal. Mach. Intell. 39, 2104–2111 (2017).
[Crossref]

Shi, H.

Z. Zha, X. Liu, X. Huang, H. Shi, Y. Xu, Q. Wang, L. Tang, and X. Zhang, “Analyzing the group sparsity based on the rank minimization methods,” in IEEE International Conference on Multimedia and Expo (ICME) (IEEE, 2017), pp. 883–888.

Smith, W. L.

F. A. Best, H. E. Revercomb, D. C. Tobin, R. O. Knuteson, J. K. Taylor, D. J. Thielman, D. P. Adler, M. W. Werner, S. D. Ellington, J. D. Elwell, D. K. Scott, G. W. Cantwell, G. E. Bingham, and W. L. Smith, “Performance verification of the geosynchronous imaging fourier transform spectrometer (GIFTS) on-board blackbody calibration system,” Proc. SPIE 6405, 64050I (2006).
[Crossref]

Solis, M.

R. O. Green, M. L. Eastwood, C. M. Sarture, T. G. Chrien, M. Aronsson, B. J. Chippendale, J. A. Faust, B. E. Pavri, C. J. Chovit, M. Solis, M. R. Olah, and O. Williams, “Imaging spectroscopy and the airborne visible/infrared imaging spectrometer (AVIRIS),” Remote Sens. Environ. 65, 227–248 (1998).
[Crossref]

Stern, A.

Strachan, I. B.

D. Haboudane, J. R. Miller, E. Pattey, P. J. Zarco-Tejada, and I. B. Strachan, “Hyperspectral vegetation indices and novel algorithms for predicting green LAI of crop canopies: modeling and validation in the context of precision agriculture,” Remote Sens. Environ. 90, 337–352 (2004).
[Crossref]

Tang, L.

Z. Zha, X. Liu, X. Huang, H. Shi, Y. Xu, Q. Wang, L. Tang, and X. Zhang, “Analyzing the group sparsity based on the rank minimization methods,” in IEEE International Conference on Multimedia and Expo (ICME) (IEEE, 2017), pp. 883–888.

Tao, T.

E. J. Candés, J. Romberg, and T. Tao, “Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information,” IEEE Trans. Inf. Theory 52, 489–509 (2006).
[Crossref]

E. J. Candès, J. K. Romberg, and T. Tao, “Stable signal recovery from incomplete and inaccurate measurements,” Commun. Pure Appl. Math. 59, 1207–1223 (2006).

E. J. Candés and T. Tao, “Near-optimal signal recovery from random projections: universal encoding strategies,” IEEE Trans. Inf. Theory 52, 5406–5425 (2006).
[Crossref]

Taylor, J. K.

F. A. Best, H. E. Revercomb, D. C. Tobin, R. O. Knuteson, J. K. Taylor, D. J. Thielman, D. P. Adler, M. W. Werner, S. D. Ellington, J. D. Elwell, D. K. Scott, G. W. Cantwell, G. E. Bingham, and W. L. Smith, “Performance verification of the geosynchronous imaging fourier transform spectrometer (GIFTS) on-board blackbody calibration system,” Proc. SPIE 6405, 64050I (2006).
[Crossref]

Thai, C.

M. D. Evans, C. Thai, and J. C. Grant, “Development of a spectral imaging system based on a liquid crystal tunable filter,” Trans. ASAE 41, 1845–1852 (1998).
[Crossref]

Thielman, D. J.

F. A. Best, H. E. Revercomb, D. C. Tobin, R. O. Knuteson, J. K. Taylor, D. J. Thielman, D. P. Adler, M. W. Werner, S. D. Ellington, J. D. Elwell, D. K. Scott, G. W. Cantwell, G. E. Bingham, and W. L. Smith, “Performance verification of the geosynchronous imaging fourier transform spectrometer (GIFTS) on-board blackbody calibration system,” Proc. SPIE 6405, 64050I (2006).
[Crossref]

Thompson, R. C.

Tobin, D. C.

F. A. Best, H. E. Revercomb, D. C. Tobin, R. O. Knuteson, J. K. Taylor, D. J. Thielman, D. P. Adler, M. W. Werner, S. D. Ellington, J. D. Elwell, D. K. Scott, G. W. Cantwell, G. E. Bingham, and W. L. Smith, “Performance verification of the geosynchronous imaging fourier transform spectrometer (GIFTS) on-board blackbody calibration system,” Proc. SPIE 6405, 64050I (2006).
[Crossref]

Tsai, T.

X. Yuan, T. Tsai, R. Zhu, P. Llull, D. Brady, and L. Carin, “Compressive hyperspectral imaging with side information,” IEEE J. Sel. Top. Signal Process. 9, 964–976 (2015).
[Crossref]

A. Rajwade, D. Kittle, T. Tsai, D. J. Brady, and L. Carin, “Coded hyperspectral imaging and blind compressive sensing,” SIAM J. Imaging Sci. 6, 782–812 (2013).
[Crossref]

Turunen, J.

Ulichney, R. A.

R. A. Ulichney, “Dithering with blue noise,” Proc. IEEE 76, 56–79 (1988).
[Crossref]

Vachman, C.

Wang, L.

L. Wang, T. Zhang, Y. Fu, and H. Huang, “HyperReconNet: joint coded aperture optimization and image reconstruction for compressive hyperspectral imaging,” IEEE Trans. Image Process. 28, 2257–2270 (2019).
[Crossref]

L. Wang, Z. Xiong, G. Shi, F. Wu, and W. Zeng, “Adaptive nonlocal sparse representation for dual-camera compressive hyperspectral imaging,” IEEE Trans. Pattern Anal. Mach. Intell. 39, 2104–2111 (2017).
[Crossref]

Wang, Q.

Z. Zha, X. Liu, X. Huang, H. Shi, Y. Xu, Q. Wang, L. Tang, and X. Zhang, “Analyzing the group sparsity based on the rank minimization methods,” in IEEE International Conference on Multimedia and Expo (ICME) (IEEE, 2017), pp. 883–888.

Wang, X.

X. Wang, Y. Zhang, X. Ma, T. Xu, and G. R. Arce, “Compressive spectral imaging system based on liquid crystal tunable filter,” Opt. Express 26, 25226–25243 (2018).
[Crossref]

X. Wang and J. Liang, “Side information-aided compressed sensing reconstruction via approximate message passing,” in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (IEEE, 2014), pp. 3330–3334.

Wang, Y.

Q. Li, X. He, Y. Wang, H. Liu, D. Xu, and F. Guo, “Review of spectral imaging technology in biomedical engineering: achievements and challenges,” J. Biomed. Opt. 18, 100901 (2013).
[Crossref]

Wang, Z.

Werner, M. W.

F. A. Best, H. E. Revercomb, D. C. Tobin, R. O. Knuteson, J. K. Taylor, D. J. Thielman, D. P. Adler, M. W. Werner, S. D. Ellington, J. D. Elwell, D. K. Scott, G. W. Cantwell, G. E. Bingham, and W. L. Smith, “Performance verification of the geosynchronous imaging fourier transform spectrometer (GIFTS) on-board blackbody calibration system,” Proc. SPIE 6405, 64050I (2006).
[Crossref]

Wetzstein, G.

Willett, R. M.

R. M. Willett, M. F. Duarte, M. A. Davenport, and R. G. Baraniuk, “Sparsity and structure in hyperspectral imaging: sensing, reconstruction, and target detection,” IEEE Signal Process. Mag. 31, 116–126 (2014).
[Crossref]

Williams, O.

R. O. Green, M. L. Eastwood, C. M. Sarture, T. G. Chrien, M. Aronsson, B. J. Chippendale, J. A. Faust, B. E. Pavri, C. J. Chovit, M. Solis, M. R. Olah, and O. Williams, “Imaging spectroscopy and the airborne visible/infrared imaging spectrometer (AVIRIS),” Remote Sens. Environ. 65, 227–248 (1998).
[Crossref]

Winters, D. G.

Woltman, S. J.

S. J. Woltman, G. D. Jay, and G. P. Crawford, “Liquid-crystal materials find a new order in biomedical applications,” Nat. Mater. 6, 929–938 (2007).
[Crossref]

Wright, S. J.

S. J. Wright, R. D. Nowak, and M. A. T. Figueiredo, “Sparse reconstruction by separable approximation,” IEEE Trans. Signal Process. 57, 2479–2493 (2009).
[Crossref]

M. A. T. Figueiredo, R. D. Nowak, and S. J. Wright, “Gradient projection for sparse reconstruction: application to compressed sensing and other inverse problems,” IEEE J. Sel. Top. Signal Process. 1, 586–597 (2007).
[Crossref]

Wu, F.




Figures (15)

Fig. 1. Sketch of the LCTF-based hyperspectral imager.
Fig. 2. Examples of different projection matrices (N = 401, L = 100) for the original signal shown in (a): (b) the random projection matrix and (c) the TACS projection matrix.
Fig. 3. Sequential scan of the spectral channels and compressive measurements using multiple coded snapshots. Note the coarse resolution of the detector array compared with that of the coded aperture.
Fig. 4. Method to generate the TACS coded apertures for the hyperspectral imaging systems.
Fig. 5. (a) Original reference image; (b) the recovered image using a random coded aperture; and (c), (d) a pair of the TACS coded aperture patterns generated based on (b).
Fig. 6. (a) Original spectral images, (b) the simulated low-resolution images obtained by the conventional system, (c) the reconstructed spectral images using TACS coded apertures, and (d) the reconstructed spectral images using random coded apertures. Magnified details are presented as well.
Fig. 7. (a) RGB image of the scene, and (b)–(d) the original and reconstructed spectral signatures for three representative points, indicated by P1, P2, and P3 in (a).
Fig. 8. Influence of three key factors on the reconstruction performance. (a) The average reconstructed PSNRs in all spectral channels using different sub-group numbers, (b) the curves of average reconstructed PSNRs with respect to different compression ratios, and (c) the average PSNRs and (d) runtimes corresponding to different reconstruction algorithms.
Fig. 9. Examples of coded patterns generated by (a) Galvis’s method and (b) Yang’s method.
Fig. 10. (a) Original spectral images in four spectral channels, (b) the reconstructed spectral images using the TACS coding method, (c) the reconstructed spectral images using Galvis’s method, and (d) the reconstructed spectral images using Yang’s method.
Fig. 11. Testbed of the staring hyperspectral imager with the proposed TACS coded apertures.
Fig. 12. (a) RGB image of the target used in the experiment; (b) an example of random coded aperture patterns; and (c), (d) examples of TACS coded aperture patterns.
Fig. 13. (a) Original images in four spectral channels, (b) the low-resolution images obtained by the conventional system, (c) the reconstructed images obtained by TACS coded apertures, and (d) the reconstructed images obtained by random coded apertures. The color maps of the error patterns corresponding to (e) TACS coded apertures and (f) random coded apertures.
Fig. 14. Original and reconstructed spectra for two representative points indicated by P1 and P2 as shown in Fig. 12(a).
Fig. 15. (a) Original spectral images with center wavelengths of 554 nm, 563 nm, and 572 nm; (b) the simulated low-resolution images obtained by the conventional system; (c) the reconstructed spectral images using the proposed multi-channel method, and the reconstructed spectral images using the single-channel method with (d) TACS coded apertures and (e) random coded apertures.

Tables (3)

Table 1. Comparison of Mean Values of $\mu_{\Upsilon}$ and $\mu_{\bar{\Upsilon}}$ Obtained by Random Projection Matrices and TACS Projection Matrices

Table 2. Comparison Among Different Methods Using the Experimental Testbed, Including the Conventional System, and Our System Using Random Coded Apertures and TACS Coded Apertures

Table 3. Comparison Among the Three Coding Methods, Including the Single-Channel Random Coding Method, Single-Channel TACS Coding Method, and Multi-Channel TACS Coding Method

Equations (45)


\[\hat{\Theta} = \arg\min_{\Theta}\|\Theta\|_1, \quad \text{s.t.}\ Y = \Phi X = \Phi\Psi\Theta,\]

\[\mu = \max\{|\langle \phi_i,\psi_j\rangle|^2\}, \quad i = 1,2,\ldots,L \ \text{and}\ j = 1,2,\ldots,N,\]

\[S = X + \varepsilon,\]

\[\phi_{ij} = \begin{cases}\dfrac{1+\operatorname{sgn}(S_j-\Lambda_{ij})}{2\sqrt{N}}, & \text{if } 1\le i\le \frac{L}{2},\\[6pt] \dfrac{1-\operatorname{sgn}(S_j-\Lambda_{ij})}{2\sqrt{N}}, & \text{if } \frac{L}{2}< i\le L,\end{cases}\]

\[\bar{\mu}_{\Upsilon} \triangleq \max_{\psi_j\in\Upsilon} E\{|\langle\phi_i,\psi_j\rangle|^2\} > \frac{\|X\|_2^2}{2NK^2\theta_{\max}^2}, \qquad \bar{\mu}_{\bar{\Upsilon}} \triangleq \max_{\psi_j\in\bar{\Upsilon}} E\{|\langle\phi_i,\psi_j\rangle|^2\} \le \frac{1}{4N}\left\{\left[\sqrt{\frac{2}{\pi(\sigma_\Lambda^2+\sigma_X^2)}}\,\mu_X-1\right]\sum_{m=1}^{N}\hat{\psi}_m\right\}^2,\]

\[\bar{\mu}_{\Upsilon} = \max_{\psi_j\in\Upsilon} E\{|\langle\phi_i,\psi_j\rangle|^2\} > \frac{\|X\|_2^2}{2NK^2\theta_{\max}^2}, \qquad \bar{\mu}_{\bar{\Upsilon}} = \max_{\psi_j\in\bar{\Upsilon}} E\{|\langle\phi_i,\psi_j\rangle|^2\} \approx 0.\]

\[g^{n_\lambda}(x,y) = \iint f_1^{n_\lambda}(x,y)\cdot T_c(x,y)\,dx\,dy = \iint f_0(x,y,\lambda)\cdot T_s^{n_\lambda}(\lambda)\cdot T_c(x,y)\,dx\,dy,\]

\[g^{n_\lambda,l}(x,y) = \iint f_0(x,y,\lambda)\cdot T_s^{n_\lambda}(\lambda)\cdot T_c^{l}(x,y)\,dx\,dy.\]

\[G_{m_x,m_y}^{n_\lambda,l} = \sum_{n_x=R(m_x-1)+1}^{Rm_x}\ \sum_{n_y=R(m_y-1)+1}^{Rm_y} F_{n_x,n_y,n_\lambda}\cdot T_{n_x,n_y}^{n_\lambda,l},\]

\[g^{n_\lambda} = \Phi^{n_\lambda}f^{n_\lambda}+\rho^{n_\lambda} = \Phi^{n_\lambda}\Psi\Theta^{n_\lambda}+\rho^{n_\lambda},\]

\[\Phi^{n_\lambda} = \begin{bmatrix}\Phi_1^{n_\lambda} & 0 & \cdots & 0\\ 0 & \Phi_2^{n_\lambda} & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & \Phi_{M_x\cdot M_y}^{n_\lambda}\end{bmatrix},\]

\[\hat{\Theta}^{n_\lambda} = \arg\min_{\Theta^{n_\lambda}}\|\Theta^{n_\lambda}\|_1, \quad \text{s.t.}\ \|g^{n_\lambda}-\Phi^{n_\lambda}\Psi\Theta^{n_\lambda}\|_2\le\beta,\]

\[\gamma_o^m = \gamma_o\cdot\frac{1}{Q} = \frac{L}{R^2\cdot Q},\]

\[g^m = \Phi^m f^m + \rho^m = \Phi^m\Psi^m\Theta^m+\rho^m,\]

\[\Phi^m = \begin{bmatrix}\Phi_1^m & 0 & \cdots & 0\\ 0 & \Phi_2^m & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & \Phi_{M_x\cdot M_y}^m\end{bmatrix},\]

\[\Phi_i^m = [\Phi_{N_1}^i,\Phi_{N_2}^i,\ldots,\Phi_{N_Q}^i],\]

\[\bar{\mu}_{\Upsilon} = \max_{\psi_j\in\Upsilon}E\{|\langle\phi_i,\psi_j\rangle|^2\} > \frac{1}{\theta_{\max}^2}\max_{\psi_j\in\Upsilon}E\{|\langle\phi_i,\psi_j\theta_j\rangle|^2\} > \frac{1}{K^2\theta_{\max}^2}\max_{\psi_j\in\Upsilon}E\left\{\left|\left\langle\phi_i,\sum_{j=1}^{K}\psi_j\theta_j\right\rangle\right|^2\right\} > \frac{1}{LK^2\theta_{\max}^2}E\left\{\sum_{i=1}^{L}|\langle\phi_i,X\rangle|^2\right\} = \frac{1}{LK^2\theta_{\max}^2}E\{\|\Phi X\|_2^2\},\]

\[\|\Phi X\|_2^2 = \frac{1}{4N}\left\{\sum_{i=1}^{L/2}\left\{\sum_{j=1}^{N}[\operatorname{sgn}(S_j-\Lambda_{i,j})+1]X_j\right\}^2+\sum_{i=L/2+1}^{L}\left\{\sum_{j=1}^{N}[1-\operatorname{sgn}(S_j-\Lambda_{i,j})]X_j\right\}^2\right\}.\]

\[(\Delta_i)_1 = \left[\sum_{j=1}^{N}\operatorname{sgn}(S_j-\Lambda_{i,j})X_j\right]^2+\left(\sum_{j=1}^{N}X_j\right)^2+2\sum_{j=1}^{N}\operatorname{sgn}(S_j-\Lambda_{i,j})X_j\cdot\sum_{r=1}^{N}X_r, \quad i\le L/2,\]

\[(\Delta_i)_2 = \left[\sum_{j=1}^{N}\operatorname{sgn}(S_j-\Lambda_{i,j})X_j\right]^2+\left(\sum_{j=1}^{N}X_j\right)^2-2\sum_{j=1}^{N}\operatorname{sgn}(S_j-\Lambda_{i,j})X_j\cdot\sum_{r=1}^{N}X_r, \quad L/2< i\le L.\]

\[E\{\|\Phi X\|_2^2\} = E\left\{\frac{1}{4N}\left[\sum_{i=1}^{L/2}(\Delta_i)_1+\sum_{i=L/2+1}^{L}(\Delta_i)_2\right]\right\} = \frac{1}{4N}\left\{\frac{L}{2}E[(\Delta_i)_1]+\frac{L}{2}E[(\Delta_i)_2]\right\} = \frac{L}{8N}\{E[(\Delta_i)_1]+E[(\Delta_i)_2]\}.\]

\[T_1 = X_rX_j[\Pr(\Lambda_{i,r}<X_r)\Pr(\Lambda_{i,j}<X_j)+\Pr(\Lambda_{i,r}>X_r)\Pr(\Lambda_{i,j}>X_j)],\qquad T_2 = X_rX_j[\Pr(\Lambda_{i,r}>X_r)\Pr(\Lambda_{i,j}<X_j)+\Pr(\Lambda_{i,r}<X_r)\Pr(\Lambda_{i,j}>X_j)],\]

\[E[(\Delta_i)_1] = 2\|X\|_2^2+\sum_{r=1}^{N}\sum_{j=1,j\ne r}^{N}(T_1-T_2)+\sum_{r=1}^{N}\sum_{j=1,j\ne r}^{N}X_rX_j+2\left(\sum_{r=1}^{N}X_r\right)\cdot\sum_{j=1}^{N}X_j[\Pr(\Lambda_{i,j}<X_j)-\Pr(\Lambda_{i,j}>X_j)],\]

\[E[(\Delta_i)_2] = 2\|X\|_2^2+\sum_{r=1}^{N}\sum_{j=1,j\ne r}^{N}(T_1-T_2)+\sum_{r=1}^{N}\sum_{j=1,j\ne r}^{N}X_rX_j-2\left(\sum_{r=1}^{N}X_r\right)\cdot\sum_{j=1}^{N}X_j[\Pr(\Lambda_{i,j}<X_j)-\Pr(\Lambda_{i,j}>X_j)].\]

\[E[(\Delta_i)_1] = 2\|X\|_2^2+2\sum_{j=1}^{N}X_j^2\left[1-2Q\!\left(X_j\Big/\sqrt{\sigma_\Lambda^2+\sigma_X^2}\right)\right]+2\sum_{r=1}^{N}\sum_{j=1,j\ne r}^{N}X_rX_j\left[1-2Q\!\left(X_r\Big/\sqrt{\sigma_\Lambda^2+\sigma_X^2}\right)\right]+\sum_{r=1}^{N}\sum_{j=1,j\ne r}^{N}X_rX_j\left[1-2Q\!\left(X_r\Big/\sqrt{\sigma_\Lambda^2+\sigma_X^2}\right)-2Q\!\left(X_j\Big/\sqrt{\sigma_\Lambda^2+\sigma_X^2}\right)+4Q\!\left(X_r\Big/\sqrt{\sigma_\Lambda^2+\sigma_X^2}\right)Q\!\left(X_j\Big/\sqrt{\sigma_\Lambda^2+\sigma_X^2}\right)\right]+\sum_{r=1}^{N}\sum_{j=1,j\ne r}^{N}X_rX_j,\]

\[E[(\Delta_i)_2] = 2\|X\|_2^2-2\sum_{j=1}^{N}X_j^2\left[1-2Q\!\left(X_j\Big/\sqrt{\sigma_\Lambda^2+\sigma_X^2}\right)\right]-2\sum_{r=1}^{N}\sum_{j=1,j\ne r}^{N}X_rX_j\left[1-2Q\!\left(X_r\Big/\sqrt{\sigma_\Lambda^2+\sigma_X^2}\right)\right]+\sum_{r=1}^{N}\sum_{j=1,j\ne r}^{N}X_rX_j\left[1-2Q\!\left(X_r\Big/\sqrt{\sigma_\Lambda^2+\sigma_X^2}\right)-2Q\!\left(X_j\Big/\sqrt{\sigma_\Lambda^2+\sigma_X^2}\right)+4Q\!\left(X_r\Big/\sqrt{\sigma_\Lambda^2+\sigma_X^2}\right)Q\!\left(X_j\Big/\sqrt{\sigma_\Lambda^2+\sigma_X^2}\right)\right]+\sum_{r=1}^{N}\sum_{j=1,j\ne r}^{N}X_rX_j,\]

\[E[(\Delta_i)_1]+E[(\Delta_i)_2] = 4\|X\|_2^2+2\sum_{r=1}^{N}\sum_{j=1,j\ne r}^{N}X_rX_j\left[1-2Q\!\left(X_j\Big/\sqrt{\sigma_\Lambda^2+\sigma_X^2}\right)-2Q\!\left(X_r\Big/\sqrt{\sigma_\Lambda^2+\sigma_X^2}\right)+4Q\!\left(X_j\Big/\sqrt{\sigma_\Lambda^2+\sigma_X^2}\right)Q\!\left(X_r\Big/\sqrt{\sigma_\Lambda^2+\sigma_X^2}\right)\right]+2\sum_{r=1}^{N}\sum_{j=1,j\ne r}^{N}X_rX_j.\]

\[1-2\tau_r-2\tau_j+4\tau_r\tau_j > -1.\]

\[\bar{\mu}_{\Upsilon} = \max_{\psi_j\in\Upsilon}E\{|\langle\phi_i,\psi_j\rangle|^2\} > \frac{\|X\|_2^2}{2NK^2\theta_{\max}^2}.\]

\[\begin{aligned}\bar{\mu}_{\bar{\Upsilon}} &= \frac{1}{4N}E\{|\langle[\operatorname{sgn}(X-\hat{\Lambda})+1],\hat{\psi}\rangle|^2\} = \frac{1}{4N}E\left\{\sum_{p=1}^{N}[\operatorname{sgn}(X_p-\hat{\Lambda}_p)+1]\hat{\psi}_p\right\}^2\\ &= \frac{1}{4N}\sum_{m=1}^{N}\sum_{n=1}^{N}\hat{\psi}_m\hat{\psi}_n\cdot E\{[\operatorname{sgn}(X_m-\hat{\Lambda}_m)+1][\operatorname{sgn}(X_n-\hat{\Lambda}_n)+1]\}\\ &= \frac{1}{4N}\sum_{m=1}^{N}\sum_{n=1}^{N}\hat{\psi}_m\hat{\psi}_n\cdot\left[4-4Q\!\left(X_m\Big/\sqrt{\sigma_\Lambda^2+\sigma_X^2}\right)-4Q\!\left(X_n\Big/\sqrt{\sigma_\Lambda^2+\sigma_X^2}\right)+4Q\!\left(X_m\Big/\sqrt{\sigma_\Lambda^2+\sigma_X^2}\right)Q\!\left(X_n\Big/\sqrt{\sigma_\Lambda^2+\sigma_X^2}\right)\right].\end{aligned}\]

\[Q(x)\approx\frac{1}{2}-\frac{1}{\sqrt{2\pi}}x.\]

\[\bar{\mu}_{\bar{\Upsilon}} = \max_{\psi_j\in\bar{\Upsilon}}E\{|\langle\phi_i,\psi_j\rangle|^2\} \approx \frac{1}{4N}\left[\sum_{m=1}^{N}\sum_{n=1}^{N}\hat{\psi}_m\hat{\psi}_n+\sum_{m=1}^{N}\sum_{n=1}^{N}\hat{\psi}_n\sqrt{\frac{2}{\pi(\sigma_\Lambda^2+\sigma_X^2)}}X_m\hat{\psi}_m+\sum_{m=1}^{N}\sum_{n=1}^{N}\hat{\psi}_m\sqrt{\frac{2}{\pi(\sigma_\Lambda^2+\sigma_X^2)}}X_n\hat{\psi}_n+\sum_{m=1}^{N}\sum_{n=1}^{N}\frac{2X_m\hat{\psi}_mX_n\hat{\psi}_n}{\pi(\sigma_\Lambda^2+\sigma_X^2)}\right].\]

\[\bar{\mu}_{\bar{\Upsilon}} \le \frac{1}{4N}\left[\sum_{m=1}^{N}\sum_{n=1}^{N}\hat{\psi}_m\hat{\psi}_n-2\sqrt{\frac{2}{\pi(\sigma_\Lambda^2+\sigma_X^2)}}\mu_X\sum_{m=1}^{N}\sum_{n=1}^{N}\hat{\psi}_m\hat{\psi}_n+\frac{2\mu_X^2}{\pi(\sigma_\Lambda^2+\sigma_X^2)}\sum_{m=1}^{N}\sum_{n=1}^{N}\hat{\psi}_m\hat{\psi}_n\right] = \frac{1}{4N}\left\{\left[\sqrt{\frac{2}{\pi(\sigma_\Lambda^2+\sigma_X^2)}}\mu_X-1\right]\sum_{m=1}^{N}\hat{\psi}_m\right\}^2.\]

\[\bar{\mu}_{\bar{\Upsilon}} \le \frac{1}{4N}\left\{\left[\sqrt{\frac{2}{\pi(\sigma_\Lambda^2+\sigma_X^2)}}\mu_X+1\right]\sum_{m=1}^{N}\hat{\psi}_m\right\}^2.\]

\[\bar{\mu}_{\bar{\Upsilon}} \le \frac{1}{4N}\left\{\left[\sqrt{\frac{2}{\pi(\sigma_\Lambda^2+\sigma_X^2)}}\mu-1\right]\sum_{m=1}^{N}\hat{\psi}_m\right\}^2.\]

\[\Psi = \Omega\otimes\Omega = \begin{bmatrix}\omega_{11}\Omega & \cdots & \omega_{1n}\Omega\\ \vdots & \ddots & \vdots\\ \omega_{n1}\Omega & \cdots & \omega_{nn}\Omega\end{bmatrix}.\]

\[\sum_{i=1}^{N}\psi_{ij} = \sum_{k=1}^{n}\sum_{l=1}^{n}\omega_{l,\lceil j/n\rceil}\,\omega_{k,\,j-(\lceil j/n\rceil-1)n},\]

\[\omega_{kj} = \begin{cases}\dfrac{1}{\sqrt{n}}, & \text{if } j=1,\\[6pt] \sqrt{\dfrac{2}{n}}\cos\dfrac{(2k-1)(j-1)\pi}{2n}, & \text{if } j>1.\end{cases}\]

\[\sum_{k=1}^{n}\cos\frac{(2k-1)(j-1)\pi}{2n} = 0.\]

\[\sum_{k=1}^{n}\cos\frac{(2k-1)a\pi}{2n} = \operatorname{Re}\left\{\sum_{k=1}^{n}\exp\left[\frac{(2k-1)a\pi i}{2n}\right]\right\} = \operatorname{Re}\left[\exp\left(-\frac{a\pi i}{2n}\right)\sum_{k=1}^{n}\exp\left(\frac{ka\pi i}{n}\right)\right],\]

\[\sum_{k=1}^{n}\exp\left(\frac{ka\pi i}{n}\right) = \exp\left(\frac{a\pi i}{n}\right)\frac{1-\left[\exp\left(\frac{a\pi i}{n}\right)\right]^{n}}{1-\exp\left(\frac{a\pi i}{n}\right)} = \exp\left(\frac{a\pi i}{n}\right)\frac{1-\exp(a\pi i)}{1-\exp\left(\frac{a\pi i}{n}\right)}.\]

\[\exp\left(-\frac{a\pi i}{2n}\right)\sum_{k=1}^{n}\exp\left(\frac{ka\pi i}{n}\right) = \exp\left(\frac{a\pi i}{2n}\right)\cdot\frac{1-\exp(a\pi i)}{1-\exp\left(\frac{a\pi i}{n}\right)} = \exp\left(\frac{a\pi i}{2n}\right)\frac{2}{1-\exp\left(\frac{a\pi i}{n}\right)}.\]

\[\sum_{i=1}^{n}\omega_{ij} = 0, \quad \text{for } j>1.\]

\[\sum_{i=1}^{N}\psi_{ij} = \begin{cases}\sqrt{N}, & \text{if } j=1,\\ 0, & \text{if } j>1.\end{cases}\]

\[\sum_{i=1}^{N}\psi_{ij} = 0 \quad \text{if } \psi_j\in\bar{\Upsilon}.\]
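The two-tone construction of the projection matrix above (rows 1 to L/2 pass the bright regions of the prior estimate S = X + ε, rows L/2+1 to L the dark regions) can be sketched in a few lines. This is a minimal illustration with Gaussian random thresholds Λ and arbitrary toy parameters, not the paper's exact implementation:

```python
import numpy as np

def two_tone_projection(S, L, sigma_lam=0.1, rng=None):
    """Build the L x N two-tone projection matrix from a prior estimate S.

    Follows phi_ij = (1 +/- sgn(S_j - Lam_ij)) / (2 sqrt(N)):
    the first L/2 rows keep pixels where the prior exceeds the random
    threshold (bright regions); the last L/2 rows keep the complement
    (dark regions).
    """
    if rng is None:
        rng = np.random.default_rng(0)
    N = S.size
    Lam = sigma_lam * rng.standard_normal((L, N))   # random thresholds Lambda_ij
    sgn = np.sign(S[None, :] - Lam)                 # +1 where prior exceeds threshold
    Phi = np.empty((L, N))
    Phi[: L // 2] = (1 + sgn[: L // 2]) / (2 * np.sqrt(N))   # bright-region rows
    Phi[L // 2 :] = (1 - sgn[L // 2 :]) / (2 * np.sqrt(N))   # dark-region rows
    return Phi

# Toy prior: a "bright" step on the right half of a 16-pixel signal.
S = np.concatenate([np.zeros(8), np.ones(8)])
Phi = two_tone_projection(S, L=4)
# Every entry is either 0 or 1/sqrt(N): a binary on/off pattern, scaled.
assert np.all(np.isclose(Phi, 0) | np.isclose(Phi, 1 / np.sqrt(S.size)))
```

The binary on/off structure of each row is what makes these patterns directly realizable on a DMD-based coded aperture.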

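The closing identities state that, for the 2D DCT basis Ψ = Ω ⊗ Ω, only the DC atom has a nonzero column sum (equal to √N); every other atom sums to zero. This can be checked numerically with a small pure-Python sketch; the `dct_basis` helper below is illustrative, not code from the paper:

```python
import math

def dct_basis(n):
    """n x n DCT basis omega[k][j] (0-based indices; column j=0 is the DC atom)."""
    omega = [[0.0] * n for _ in range(n)]
    for k in range(n):
        for j in range(n):
            if j == 0:
                omega[k][j] = 1.0 / math.sqrt(n)
            else:
                omega[k][j] = math.sqrt(2.0 / n) * math.cos(
                    (2 * k + 1) * j * math.pi / (2 * n)
                )
    return omega

n = 4
N = n * n
omega = dct_basis(n)
# Column sums of Psi = Omega (x) Omega factor into products of column sums of Omega.
col_sums = [sum(omega[k][j] for k in range(n)) for j in range(n)]
psi_col_sums = [col_sums[j1] * col_sums[j2] for j1 in range(n) for j2 in range(n)]
assert abs(psi_col_sums[0] - math.sqrt(N)) < 1e-9      # DC column sums to sqrt(N)
assert all(abs(s) < 1e-9 for s in psi_col_sums[1:])    # all other columns sum to 0
```

The Kronecker factorization is what reduces the 2D statement to the 1D cosine-sum identity proved above.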