## Abstract

In this paper, we present a signal processing approach that improves the resolution of a spectrometer built from a fixed number of low-cost, non-ideal filters. We aim to show that the resolution can be improved beyond the limit set by the number of filters by exploiting the sparse nature of the signal spectrum. We model signal spectrum estimation as an underdetermined system of linear equations and design a non-negative ${L}_{1}$ norm minimization algorithm for solving it. We demonstrate that the proposed algorithm improves the resolution severalfold.

© 2012 OSA

## 1. Introduction

Miniature spectrometers are in high demand in various industrial and domestic applications [1]. Modern miniature spectrometers are built with filter arrays to bring down their cost. A filter array is usually fabricated using CMOS or nanofabrication technology [2, 3], which reduces the size of the spectrometer in addition to its overall cost. Miniature spectrometers based on a fabricated filter array are highly portable, which makes in situ measurements practical. These spectrometers can be readily interfaced with personal computers and other electronic systems. In addition, filter-array-based spectrometers can capture the spectrum of a light source in a short duration.

In recent years, researchers have focused on improving the resolution of filter-array-based spectrometers for the following reasons. Such spectrometers consist of a fixed set of filters, each with a different transmission function. A filter with an ideal brick-wall transmission function, which passes light only in a designed band and rejects light outside it, is best for spectrometers: with ideal filters, the spectrometer output directly gives an estimate of the signal spectrum. In modern portable spectrometers, however, low-cost fabrication means that the transmission function of each filter is not an ideal narrow brick-wall but instead varies in amplitude and phase throughout the entire spectrum. The signal spectrum obtained from such a spectrometer is therefore severely distorted, which makes it difficult to resolve the distinct spectral components of a light source [4]. To alleviate this problem, digital signal processing (DSP) techniques that post-process the raw output of the spectrometer have proven helpful. The current literature [4, 5] provides a few DSP techniques that improve the ability of filter-array spectrometers to resolve distinct spectral components. Analytically, these DSP techniques estimate the signal spectrum by solving a system of linear equations, which requires choosing an optimization criterion. In this paper, we consider two different criteria: the relatively new ${L}_{1}$ norm minimization (more details follow in Section 4.2) and the classical ${L}_{2}$ norm minimization.

In [4], the authors solved an overdetermined system of linear equations, in which the number of unknowns (spectral components) is smaller than or equal to the number of equations (the number of filters). Thus, the limit of spectral resolution is set by the number of filters in the array. For a given fixed set of non-ideal filters, improving the resolution beyond this limit amounts to solving an underdetermined system of linear equations. The authors of [4] also attempted to solve an underdetermined system, using the classical ${L}_{2}$ norm minimization criterion to obtain a unique solution. This approach failed: the ${L}_{2}$ solution of the underdetermined system was wildly inaccurate. Therefore, without explicitly defining resolution, [4] concluded that the number of filters in an array determines the maximum resolution achievable by a spectrometer. Similarly, the technique in [5] solves the overdetermined system using ${L}_{2}$ norm minimization, so the resolution obtained in [5] is likewise limited by the number of filters. In a recent paper [6], the authors solved an underdetermined system using an ${L}_{1}$ norm based minimization in order to increase the reconstruction accuracy.

In this paper, we develop a new framework that determines the achievable resolution of a spectrometer together with a definition of reconstruction accuracy. We aim to show that the achievable resolution can be improved severalfold beyond the limit set by the number of filters, an idea originally discussed in [7]. We show that an ${L}_{1}$ norm minimization based approach is better suited than classical ${L}_{2}$ norm minimization for solving an underdetermined system of linear equations. We design a new spectrum estimation algorithm using ${L}_{1}$ norm minimization that exploits the prior information that the signal spectrum is sparse. We demonstrate that, given a fixed set of filters with non-ideal characteristics, the resolution can be increased several times beyond what is suggested by [4] and [5]. The technique proposed in this paper may also apply to other DSP-based spectrometers, such as those employed in the determination of rice wine composition [8], liquid signature analysis [9] and organic detection [10].

This paper is organized as follows: Section 2 presents the system model and Section 3 discusses the concept of resolution and achievable limit of resolution of a spectrometer. Section 4 presents the sparse representation of the signal spectrum and ${L}_{1}$ norm minimization technique. Experimental results are discussed in Section 5 and Section 6 concludes the paper.

## 2. System description

A source of light, whose spectrum is to be estimated, falls on the filter array shown in Fig. 1. The filter array consists of $M$ elements (filters) arranged in a 2D pattern. Underneath the filter array is a photosensor array of charge-coupled devices (CCDs), each of which converts the filtered light into an electric charge. Each filter corresponds to one pixel element of the CCD array. The combined arrangement of the filter array and the CCD array constitutes the spectral detector. The output of the spectral detector is sampled and fed into a DSP unit to estimate the spectrum.

Let $x\left(\lambda \right)$ represent the spectral component of the original light source at wavelength $\lambda$. Each element of the filter array is specified by its transmission function, which measures the fraction of light that the filter passes at a given wavelength $\lambda$. Let ${f}_{i}\left(\lambda \right)$, $i=1,2,\cdots ,M$, represent the transmission function of the ${i}^{th}$ filter of the array. Figure 2 shows typical transmission functions of the filters in the filter array [4]. We note from Fig. 2 that the transmission functions are not ideal brick-walls, owing to the low-cost fabrication of the filters. As is evident from Fig. 2, the pass band of each filter leaks into the pass bands of neighboring filters. Therefore, each filter senses not only the light in its own pass band but also light from neighboring bands. Though this distorts the spectral information in any particular filter band, it also means that more than one filter senses the same spectral information.

Let $d\left(\lambda \right)$ represent the sensitivity function of the CCD detectors (see Fig. 1), which is assumed to be the same for all elements. Let ${D}_{i}\left(\lambda \right)$, $i=1,2,\cdots ,M$, represent the sensitivity of the ${i}^{th}$ spectral detector at wavelength $\lambda$, given by ${D}_{i}\left(\lambda \right)=d\left(\lambda \right){f}_{i}\left(\lambda \right)$. We note that each ${D}_{i}\left(\lambda \right)$ is a continuous function of the wavelength $\lambda$. The output ${y}_{i}$ of the ${i}^{th}$ spectral detector is then given by ${y}_{i}=\int {D}_{i}\left(\lambda \right)x\left(\lambda \right)\,d\lambda +{w}_{i}$, where ${w}_{i}$ is the observation (measurement) noise.

We collect all $M$ samples from the output of the spectral detector and arrange them in a vector $y={\left[{y}_{1},\cdots ,{y}_{M}\right]}^{T}$. Our aim is to obtain a high-resolution spectral estimate using the given, fixed number $M$ of filter elements (and thus a fixed number of elements in the CCD array). We model the output vector $y$ as a system of linear equations:

$y=Dx+w, \qquad (1)$

where $D$ is the $M\times N$ detector sensitivity matrix

$D=\left[\begin{array}{ccc}{D}_{1}\left({\lambda}_{1}\right)& \cdots & {D}_{1}\left({\lambda}_{N}\right)\\ \vdots & \ddots & \vdots \\ {D}_{M}\left({\lambda}_{1}\right)& \cdots & {D}_{M}\left({\lambda}_{N}\right)\end{array}\right]. \qquad (2)$

We note that the entry ${D}_{i}\left({\lambda}_{j}\right)$ in Eq. (2) is obtained by uniformly sampling the sensitivity function of the ${i}^{th}$ spectral sensor along the wavelength axis. We also note that the condition number of the matrix $D$ may be high, because the non-ideal transmission functions make the rows of $D$ correlated with each other. We also observe that the more common implementation of spectrometers is based on a grating, an element that disperses the original light into its individual wavelengths. In the filter-array-based miniature spectrometers considered in this paper, the role of the dispersive grating is taken by the filter array: each filter acts as a grating element and passes only a few wavelengths of the original spectrum. Our method is applicable to grating-based spectrometers as well; we only need to know the transmission functions of all the grating elements (in our case, the filter transmission functions), which can then be arranged as the rows of the matrix $D$ in Eq. (2). We model each component of the $M\times 1$ noise vector $w$ as a Gaussian random variable with zero mean and variance ${\sigma}^{2}$.

Let $x={\left[{x}_{1},{x}_{2},\cdots ,{x}_{N}\right]}^{T}$ represent the signal spectrum vector obtained by uniformly sampling the continuous signal spectrum $x\left(\lambda \right)$ at wavelengths ${\lambda}_{1},{\lambda}_{2},\cdots ,{\lambda}_{N}$. Let ${W}_{\lambda}$ denote the total bandwidth of the signal $x$, and let $\Delta {\lambda}_{N}={W}_{\lambda}/N$ denote the spacing between the samples of $x$. Our goal is to obtain an estimate $\widehat{x}$ of the signal spectrum $x$ from the observation $y$, given the detector sensitivity matrix $D$. We measure the reconstruction accuracy of the spectral estimate by its mean square error (MSE), $\text{MSE}=\frac{1}{N}\sum _{i=1}^{N}{\left({x}_{i}-{\widehat{x}}_{i}\right)}^{2}$, where ${x}_{i}$ denotes the ${i}^{th}$ component of $x$. In the noiseless case, the system in Eq. (1) is overdetermined if $M>N$ and underdetermined if $M<N$.
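As a concrete illustration of the model above, the following sketch builds a small forward model $y=Dx+w$ and evaluates the MSE. The Gaussian-shaped sensitivity curves, their 60 nm width and the component locations are hypothetical stand-ins for the measured filter responses of Fig. 2.

```python
import numpy as np

rng = np.random.default_rng(0)

M, N = 40, 240                            # number of filters vs. spectral samples
wavelengths = np.linspace(400, 1200, N)   # nm; total bandwidth W_lambda = 800 nm

# Hypothetical non-ideal sensitivities: broad, overlapping Gaussian-shaped
# curves standing in for the measured entries D_i(lambda_j) of Eq. (2).
centres = np.linspace(400, 1200, M)
D = np.exp(-0.5 * ((wavelengths[None, :] - centres[:, None]) / 60.0) ** 2)

# A 2-sparse test spectrum: two closely spaced components.
x = np.zeros(N)
x[128], x[129] = 1.0, 0.8

sigma = 1e-3                                 # noise standard deviation
y = D @ x + sigma * rng.standard_normal(M)   # observation model, Eq. (1)

def mse(x, x_hat):
    """Reconstruction accuracy as defined in Section 2."""
    return np.mean((np.asarray(x) - np.asarray(x_hat)) ** 2)
```

Because the rows of such a $D$ overlap heavily, neighboring filters record nearly the same mixture of components, which is exactly the correlated-rows situation discussed above.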

## 3. Limit on resolution of a spectrometer

The resolution of a spectrometer is determined by its ability to distinguish two closely spaced spectral components. At a given spacing $\Delta {\lambda}_{N}={W}_{\lambda}/N$, we define the maximum achievable resolution of a spectrometer as $\Delta {\lambda}_{\mathrm{max}}={\mu}_{N}\Delta {\lambda}_{N}={\mu}_{N}{W}_{\lambda}/N$, where ${\mu}_{N}$ is the smallest separation, in units of $\Delta {\lambda}_{N}$, at which two spectral components can still be reconstructed reliably:

${\mu}_{N}=\mathrm{min}\left\{\mu \in \left\{1,2,\cdots \right\}:\mathrm{MSE}\left(\mu \right)\le \delta \right\}, \qquad (3)$

where $\mathrm{MSE}\left(\mu \right)$ denotes the reconstruction MSE for two components spaced $\mu \Delta {\lambda}_{N}$ apart and $\delta$ is a small error threshold.

In the DSP techniques proposed in [4, 5], the resolution was ${\mu}_{M}{W}_{\lambda}/M$ (because $N=M$). The authors of [4, 5] also considered the case $N>M$, but concluded that increasing $N$ beyond $M$ is of no use, since the MSE becomes very large whenever $N>M$ under the ${L}_{2}$ norm recovery technique. Using the methods in [4, 5], increasing the number of filters was therefore the only way to improve the resolution further.

In this paper, we aim to show that the resolution can be improved from ${\mu}_{M}{W}_{\lambda}/M$ to ${\mu}_{N}{W}_{\lambda}/N$ with the same fixed number $M$ of filters; the resolution improvement factor is thus ${\mu}_{M}N/\left({\mu}_{N}M\right)$. When we increase $N$ beyond $M$, the system in Eq. (1) becomes underdetermined (in the noiseless case). As shown in [4], solving such an underdetermined system with classical ${L}_{2}$ norm minimization does not improve the spectral resolution.
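For the numbers used later in Section 5 ($M=40$, $N=240$, ${W}_{\lambda}=800$ nm and, assuming ${\mu}_{M}={\mu}_{N}=1$), the improvement factor works out as follows:

```python
M, N = 40, 240        # number of filters vs. number of spectral samples
W = 800.0             # total bandwidth W_lambda in nm
mu_M = mu_N = 1       # assume single-spacing resolvability in both cases

res_old = mu_M * W / M            # resolution limited by the filter count
res_new = mu_N * W / N            # resolution from the underdetermined system
factor = (mu_M * N) / (mu_N * M)  # improvement factor

print(res_old, round(res_new, 2), factor)   # 20.0 3.33 6.0
```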

Alternatively, we aim to solve the underdetermined system using ${L}_{1}$ norm minimization techniques [11–13]. To aid ${L}_{1}$ norm minimization, we exploit the prior information that a natural signal spectrum has degrees of freedom often much smaller than its ambient dimension. Such a signal can be represented as a sparse signal in a certain domain, i.e., as a vector whose elements are all zero except for a few non-zero components. For example, a time-domain signal consisting of two tones is sparse in the Fourier domain: its spectrum has only two non-zero components, while the rest are zero. The theory of compressed sensing is built on the assumption that natural signals are sparse in some domain, such as the Fourier, wavelet or other transform domains. We discuss sparse signal modeling further in Section 4. Given that the signals are sparse in a certain domain, the advantages of using ${L}_{1}$ norm minimization over its ${L}_{2}$ counterpart are as follows:

- 1. ${L}_{1}$ norm minimization yields the sparsest solution to an underdetermined system [13].
- 2. ${L}_{1}$ norm minimization algorithms give robust performance against measurement noise. The ${L}_{2}$ based solutions usually require the inverse of the matrix $D{D}^{T}$. Since the filter transmission functions are correlated (as discussed in Section 2), the condition number of $D{D}^{T}$ is high, which amplifies the noise in the spectral estimate [4]. Even in the noiseless case, the ${L}_{2}$ based solution does not guarantee an improved spectral estimate. These problems are avoided by the ${L}_{1}$ norm minimization approach.

We observe, however, that we cannot increase $N$ (i.e., decrease $\Delta {\lambda}_{N}$) indefinitely to obtain arbitrarily fine resolution. Given a fixed $M$, there is a limit on how far $N$ can be increased. This limit depends on the detector sensitivity matrix as well as on the sparsifying basis (explained in Section 4). Given a fixed number of non-ideal filters, we aim to find the limit of the resolution that can be achieved in practice. In the next section, we discuss sparse modeling of the signal spectrum and the ${L}_{1}$ norm minimization technique for solving an underdetermined system of equations.

## 4. Sparse representation for resolution improvement

#### 4.1 Background and sparse data modeling

Signal representations have always been at the heart of most digital signal processing techniques. The purpose of such representations is to explicitly reveal the useful information embedded in a signal, typically by means of a linear (matrix) transformation. Recently, the focus has turned toward sparse representations [12, 13], which are fundamental in many applied sciences such as signal, audio and image processing, biomedical sciences, seismic processing, astronomy, machine learning and the emerging area of compressive sensing [14, 15].

Any natural signal, such as the vector $x$ in Eq. (1), can be directly sparse or can be represented as sparse in some basis, i.e., $x=\Psi s$. The basis $\Psi$ is an $N\times N$ matrix called the sparsifying basis, and the signal $s$ is $K$-sparse, i.e., only $K$ components of $s$ are non-zero and the remaining $N-K$ components are zero. The natural signal is therefore a linear combination of only $K$ columns of the matrix $\Psi$. We note that when $\Psi =I$, the identity matrix, $x=s$; such a signal $x$ is called directly sparse, i.e., it is inherently sparse [15].

In this paper, we model the original signal spectrum $x$ in Eq. (1) as a linear combination of $K$ Gaussian kernels [5, 6], i.e., $x=\Psi s$. We use Gaussian kernels because their smoothness preserves the smooth features of the signal spectrum. In addition, a Gaussian kernel is specified by only two parameters, its location and width, which can be chosen according to the nature of the signal spectrum in a particular application. To construct the kernel matrix $\Psi$, we sample a single Gaussian kernel with a chosen full width at half maximum (FWHM); the sampled kernel forms the first column of $\Psi$, and the remaining $N-1$ columns are shifted versions of it. The spacing between the samples of the Gaussian kernel is also $\Delta {\lambda}_{N}$.
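The kernel-matrix construction can be sketched as follows. The bandwidth and FWHM values match those used in Section 5, and centring a copy of the kernel on each sample point is equivalent to taking shifted versions of the first column (truncated at the band edges):

```python
import numpy as np

def gaussian_kernel_basis(N, W=800.0, fwhm=40.0):
    """Build the N x N sparsifying basis Psi whose columns are Gaussian
    kernels of the given FWHM (in nm), each shifted by one sample spacing."""
    dl = W / N                                          # spacing Delta-lambda_N
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))   # FWHM -> std. deviation
    grid = np.arange(N) * dl                            # sample positions in nm
    Psi = np.empty((N, N))
    for j in range(N):                                  # column j: kernel at sample j
        Psi[:, j] = np.exp(-0.5 * ((grid - grid[j]) / sigma) ** 2)
    return Psi

Psi = gaussian_kernel_basis(240)
```

By construction, each column peaks at 1 on its own sample and falls to 0.5 exactly one half-FWHM (here 20 nm, i.e., 6 samples) away.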

Now, using the sparse model $x=\Psi s$, Eq. (1) can be rewritten as

$y=D\Psi s+w. \qquad (4)$

We note that the dimension of $y$ is $M\times 1$, $D$ is $M\times N$ with $M<N$, and $s$ is $N\times 1$. Our goal is to obtain an estimate $\widehat{s}$ of $s$ from $y$ in Eq. (4). Note that the dimension of $y$ is much smaller than that of the sparse signal $s$. After the sparse representation, we employ the ${L}_{1}$ norm minimization criterion to uniquely recover the sparse signal from the measurement vector $y$. We discuss our recovery algorithm in Section 4.3. Before that, we point out a few similarities and differences between our approach and the closely related area of compressed sensing (CS).

As we have briefly mentioned, CS also exploits the sparse nature of signals by means of sparse representation. The purpose of CS, as the name indicates, is to compress a given signal of a fixed ambient dimension; the quality of the signal degrades with aggressive compression. The papers [16] and [17] are well-known examples of compressive sensing applied to optical systems. The purpose of this paper, however, is to improve the quality (resolution) of the recovered signal given a fixed number of observations, utilizing prior information about the signal. The signal dimension is purposely increased (by increasing $N$), and we find the resolution limit beyond which any two closely spaced spectral components are no longer resolvable. In this regard, our perspective is completely different from that of CS.

In CS, a sensing matrix is designed for efficient compression, usually so that it captures maximum information about the signal with as few compressed samples as possible. In general, the best sensing matrices can be tailor-made to suit a particular application; the most common are i.i.d. Gaussian sensing matrices [15].

We note that the role of the sensing matrix in CS is taken by the sensitivity matrix *D* in our problem. Unlike CS, we purposely increase the number of columns in the detector sensitivity matrix for a better resolution. We note from Section 2 that the sensitivity matrix $D$ is determined from the filter fabrication process. It is possible that the matrix $D$ can be designed as in CS to obtain better recovery results. We may pose this as a new optical filter design problem. The transmission function of each filter is nothing but a row of the sensitivity matrix. The transmission functions of filters can be designed following the guidelines provided in the CS literature. We plan to explore this interesting research direction in the near future.

After the sparse representation is done, the next step is to uniquely recover the sparse signal $s$ from $y$. In the next section, we discuss a few optimization methods that estimate the sparse signal. In Section 4.3, we present our new recovery algorithm based on ${L}_{1}$ norm minimization.

#### 4.2 Optimization methods

We recall that we are given only $M\left(<N\right)$ raw spectral measurements, from which we must estimate the $N$ unknowns in Eq. (4); clearly, Eq. (4) is underdetermined. The conventional way to estimate the sparse signal $s$ from $y$ is via ${L}_{2}$ norm minimization [4]. However, ${L}_{2}$ norm based techniques produce non-sparse solutions [11] and hence are not useful for sparse signal recovery; we will return to this point shortly. In this section, we introduce ${L}_{1}$ norm minimization techniques [18–20], which yield unique, sparse solutions to underdetermined systems of linear equations.

We recall that our signal model is $y=D\Psi s+w$, where $s$ is a $K$-sparse signal. The optimal, brute-force approach for recovering $s$ is to search for the sparsest vector $s$ consistent with the measurement $y$. This leads to solving the following ${L}_{0}$ norm minimization problem:

$\widehat{s}=\arg \underset{s}{\mathrm{min}}{\Vert s\Vert}_{0}\quad \text{subject to}\quad y=D\Psi s. \qquad (5)$

Since the ${L}_{0}$ problem is combinatorial and computationally intractable for large $N$, it is relaxed to the ${L}_{1}$ norm minimization problem

$\widehat{s}=\arg \underset{s}{\mathrm{min}}{\Vert s\Vert}_{1}\quad \text{subject to}\quad y=D\Psi s. \qquad (6)$

In sparse signal estimation, the ${L}_{2}$ criterion gives a non-sparse solution, as we have mentioned, whereas the ${L}_{1}$ solution is exact. Why? Our short answer is that the ${L}_{1}$ technique mimics the behavior of the ${L}_{0}$ technique, which is the optimal but brute-force method for sparse signal estimation. More elaborate answers can be found in [11, 15]. For example, Baraniuk ([15], p. 119) writes, "Unfortunately, ${L}_{2}$ minimization will almost never find a $K$-sparse solution, returning instead a non-sparse $s$ with many nonzero elements. The ${L}_{2}$ norm measures signal energy and not signal sparsity."
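The contrast between the two criteria is easy to reproduce on a small synthetic system. This sketch uses a generic random matrix rather than the spectrometer's $D\Psi$: the minimum-energy ${L}_{2}$ solution spreads energy over many coefficients, while the ${L}_{1}$ problem, solved as a linear program via the standard splitting $s=u-v$ with $u,v\ge 0$, recovers the 2-sparse vector.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
M, N = 20, 60
A = rng.standard_normal((M, N))      # generic underdetermined system (M < N)
s_true = np.zeros(N)
s_true[[5, 40]] = [1.0, 0.7]         # 2-sparse ground truth
y = A @ s_true                       # noiseless measurements

# Minimum L2-norm solution: dense, smears energy over all coefficients.
s_l2 = np.linalg.pinv(A) @ y

# L1 solution as a linear program: write s = u - v with u, v >= 0,
# and minimise sum(u + v) subject to [A, -A] [u; v] = y.
res = linprog(np.ones(2 * N), A_eq=np.hstack([A, -A]), b_eq=y,
              bounds=(0, None))
s_l1 = res.x[:N] - res.x[N:]

print(np.sum(np.abs(s_l2) > 1e-6))   # many non-zero entries: not sparse
print(np.sum(np.abs(s_l1) > 1e-3))   # only the true components survive
```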

#### 4.3 The proposed non-negative ${L}_{1}$ minimization (NNLM) algorithm

In order to find an optimal estimate $\widehat{s}$, the above ${L}_{1}$ norm minimization problem is usually recast as a linear program, which can be solved efficiently. We let $A=D\Psi$ and define the relaxed version of Eq. (6) as

$\widehat{s}=\arg \underset{s}{\mathrm{min}}{\Vert s\Vert}_{1}\quad \text{subject to}\quad {\Vert y-As\Vert}_{2}\le \tau , \qquad (7)$

where $\tau$ is a non-negative parameter. We now cast the minimization in Eq. (7) as a linear programming problem with the non-negativity constraint $s\ge 0$ (the signal spectrum is non-negative):

$\widehat{s}=\arg \underset{s}{\mathrm{min}}{\Vert s\Vert}_{1}\quad \text{subject to}\quad {\Vert y-As\Vert}_{2}\le \tau ,\ s\ge 0. \qquad (8)$

In this paper, we adopt a modern interior-point method, the primal-dual approach [20], to solve this problem and find the optimal signal spectrum estimate. A pseudo code of the algorithm is given in Table 1.
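The non-negativity constraint simplifies the program considerably: for $s\ge 0$, ${\Vert s\Vert}_{1}$ is simply the sum of the entries of $s$, so no variable splitting is needed. The paper's algorithm handles the quadratic constraint with a primal-dual interior-point solver (Table 1); the noiseless sketch below, using an off-the-shelf LP solver and a random stand-in for $A=D\Psi$, only illustrates the structure of the problem.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
M, N = 20, 60
A = rng.standard_normal((M, N))   # random stand-in for A = D @ Psi
s_true = np.zeros(N)
s_true[[10, 12]] = [1.0, 0.8]     # non-negative, 2-sparse spectrum
y = A @ s_true                    # noiseless measurements

# Noiseless limit of Eq. (8) (tau = 0): since s >= 0 implies
# ||s||_1 = sum(s), minimise 1^T s subject to A s = y, s >= 0.
res = linprog(np.ones(N), A_eq=A, b_eq=y, bounds=(0, None))
s_hat = res.x
```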

The following ${L}_{1}$ minimization problem, called ${L}_{1}$-regularized least squares, was adopted in [6] from Kim *et al.* [21]:

$\widehat{s}=\arg \underset{s}{\mathrm{min}}{\Vert y-As\Vert}_{2}^{2}+\lambda {\Vert s\Vert}_{1}, \qquad (9)$

where $\lambda >0$ is a regularization parameter.

Kim *et al.* solved the above problem using a classical interior-point method based on a logarithmic barrier. In this paper, we refer to this algorithm as the log-barrier interior point (LBIP) algorithm. The goal of the LBIP is to make the inequality constraints arising in Eq. (9) implicit in the objective function.

In our method, we solve the problem in Eq. (8), which can be categorized as ${L}_{1}$ minimization with a quadratic constraint. Our optimization approach is completely different from Eq. (9). First, the objective functions differ, as do the ways of incorporating the constraints into the objective, so the solution paths taken by the two algorithms are distinct. Second, we design a new algorithm around the modern primal-dual interior-point approach. Algorithms based on primal-dual approaches are often shown [20, 22] to be more efficient than logarithmic barrier methods, especially when high accuracy is required. As noted in the review article [22] on interior-point methods, the general opinion today is that primal-dual methods offer the greatest promise for faster and more reliable algorithms. We have verified this through the simulations reported in Section 5. The algorithmic differences between the logarithmic barrier and primal-dual approaches are detailed in [20], p. 609 and [22], p. 527.

## 5. Results and discussions

In this section, we demonstrate the performance of the proposed NNLM. We consider a filter array with $M=40$ elements, as in [4]. Typical transmission functions of the 1st, 10th and 30th filters are shown in Fig. 2. The operating wavelength range of the spectrometer is 400 nm to 1200 nm, corresponding to ${W}_{\lambda}=800$ nm. We vary $N$, the number of spectral samples, in order to find the maximum possible resolution $\Delta {\lambda}_{\mathrm{max}}$. The value of $N$ starts from $M$ (which corresponds to the methods in [4, 5]) and is increased in steps of 40, i.e., $N$ = 40, 80, 120, 160, 200, 240. For $N=40$, the minimum spacing between two adjacent spectral components is 20 nm, i.e., $\Delta \lambda =20$ nm (from Section 2). We generate the Gaussian kernels for each $N$ with FWHM = 40 nm. At the suggestion of an anonymous reviewer, we have also considered non-Gaussian kernels such as Lorentzian and hyperbolic secant kernels, which solidifies our claim that the method works for generally shaped signals. We report that we obtained results similar to those with the Gaussian kernels, including the ability to reconstruct two closely spaced Lorentzian or hyperbolic secant peaks. We define the signal-to-noise ratio (SNR) in dB for the model in Eq. (4) as $\text{SNR}=10{\mathrm{log}}_{10}\left({\Vert D\Psi s\Vert}^{2}/{\sigma}^{2}\right)$.
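The noise level in such simulations follows directly from this SNR definition. A small helper (a sketch; the variable names are ours) converts a target SNR in dB into the noise standard deviation $\sigma$:

```python
import numpy as np

def noise_std(clean, snr_db):
    """sigma such that 10*log10(||clean||^2 / sigma^2) equals snr_db."""
    return np.sqrt(np.sum(np.asarray(clean) ** 2) / 10.0 ** (snr_db / 10.0))

rng = np.random.default_rng(3)
clean = rng.standard_normal(40)   # stands in for the noiseless D @ Psi @ s
sigma = noise_std(clean, 60.0)    # 60 dB SNR, as in the first experiment
y = clean + sigma * rng.standard_normal(clean.size)
```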

We first consider $N=240$. The value of $\delta$ required in Eq. (3) is set to ${10}^{-3}$. The original signal spectrum has sparsity $K=2$, i.e., two spectral components. We choose two adjacent spectral components, since resolution is precisely the question of whether two closely spaced components can be distinguished. Figure 3(a) shows the original signal spectrum, a linear combination of two closely spaced spectral components located at wavelengths of 828.46 nm and 831.8 nm, as shown in Fig. 3(b).

Since the two spectral components are close to each other, they are not clearly visible in the original signal spectrum. We aim to resolve these components in the sparse domain.

Figure 4(a) shows the output samples of the spectral detector at 60 dB SNR from which we need to estimate the spectral components. Figure 4(b) shows the reconstructed signal spectrum using the NNLM approach. It is evident from Fig. 4(b) that NNLM is able to clearly reconstruct the original signal spectrum.

In order to resolve the closely spaced spectral components, we show the reconstructed signal in the sparse domain in Fig. 5(a). It is evident from Fig. 5(a) that the dominant spectral components are clearly resolved. However, two additional spectral components of very small magnitude appear at 805.02 nm and 871.97 nm; these are due to error in the reconstruction of the sparse signal. We note that these additional components do not affect the resolvability of the dominant spectral components originally present in the signal spectrum. Since two spectral components that are $\Delta {\lambda}_{N}$ apart are distinctly resolved, ${\mu}_{N}=1$. Hence, the maximum spectral resolution obtained with $N=240$ is $\Delta {\lambda}_{\mathrm{max}}=3.33\text{ nm}$, which is 6 times better than the conventional limit of 20 nm (assuming ${\mu}_{M}=1$).

It is well known in the sparse representation and compressed sensing literature [15] that conventional ${L}_{2}$ norm solutions of underdetermined systems are not accurate. Our experiments verify this: the closely spaced spectral components cannot be resolved in the sparse domain using the classical ${L}_{2}$ technique. To illustrate, Fig. 5(b) shows a portion of a signal in the sparse domain recovered using the ${L}_{2}$ technique. It is evident from the figure that the ${L}_{2}$ technique does not yield a sparse signal and cannot resolve the two closely spaced spectral components in the sparse domain.

We now illustrate the performance of the proposed NNLM in the reconstruction of generally shaped spectra with multiple peaks at 40 dB SNR. Figure 6 shows an original spectrum with peaks at 564 nm, 761.5 nm and 875.3 nm. It is evident from Fig. 6 that NNLM resolves the three dominant spectral components. We also include in Fig. 6 the estimate of the signal spectrum obtained by the LBIP algorithm. Unlike our NNLM algorithm, the LBIP algorithm underestimates the original signal spectrum, especially along the tails and at the peaks. For instance, the dominant peak of the original signal spectrum at 875.3 nm is not detected as dominant by the LBIP algorithm. The MSE (defined in Section 2) for the proposed NNLM algorithm is $2.03\times {10}^{-5}$, whereas for the LBIP algorithm it is $5.6\times {10}^{-3}$.

Figure 7 shows the original and reconstructed signals in the sparse domain for the proposed NNLM and the LBIP algorithm. The original signal spectrum shown in Fig. 6 is modeled as a linear combination of three Gaussian kernels, with the non-zero components of the sparse vector at locations 1, 60 and 240. In Fig. 7, note that the proposed NNLM detects all the original non-zero components, whereas the LBIP algorithm misses the second dominant non-zero component entirely. Both NNLM and LBIP make some detection errors because of the observation noise, but the errors made by NNLM are less significant. Based on these results and observations, we conclude that the proposed NNLM outperforms the LBIP algorithm in terms of MSE performance.

## 6. Summary and conclusions

In this paper, we discussed a signal processing approach to improve the resolution of portable spectrometers beyond the limit set by the number of filters. We modeled the signal spectrum estimation as a procedure for solving an underdetermined system of linear equations. We designed a novel non-negative ${L}_{1}$norm minimization (NNLM) algorithm that exploits the sparse nature of the signal spectrum. We illustrated the performance of the proposed NNLM in improving the resolution of the signal spectrum through various experiments.

## Acknowledgments

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korean government (MEST) (Do-Yak Research Program, No. 2011-0016496).

## References and links

**1. **D. J. Brady, *Optical Imaging and Spectroscopy* (John Wiley and Sons, 2009).

**2. **S. W. Wang, X. Chen, W. Lu, L. Wang, Y. Wu, and Z. Wang, “Integrated optical filter arrays fabricated by using the combinatorial etching technique,” Opt. Lett. **31**(3), 332–334 (2006). [CrossRef] [PubMed]

**3. **S. W. Wang, C. Xia, X. Chen, W. Lu, M. Li, H. Wang, W. Zheng, and T. Zhang, “Concept of a high-resolution miniature spectrometer using an integrated filter array,” Opt. Lett. **32**(6), 632–634 (2007). [CrossRef] [PubMed]

**4. **C. C. Chang and H. N. Lee, “On the estimation of target spectrum for filter-array based spectrometer,” Opt. Express **16**(2), 1056–1061 (2008). [CrossRef]

**5. **U. Kurokawa, B. I. Choi, and C.-C. Chang, “Filter-based miniature spectrometers: spectrum reconstruction using adaptive regularization,” IEEE Sens. J. **11**(7), 1556–1563 (2011). [CrossRef]

**6. **C. C. Chang, N. T. Lin, U. Kurokawa, and B. I. I. Choi, “Spectrum reconstruction for filter-array spectrum sensor from sparse template selection,” Opt. Eng. **50**(11), 114402 (2011). [CrossRef]

**7. **H. N. Lee, Introduction to Compressed Sensing (Lecture notes; Spring Semester, GIST, South Korea, 2011).

**8. **H. Y. Yu, X. Y. Niu, H. J. Lin, Y. B. Ying, B. B. Li, and X. X. Pan, “A feasibility study on on-line determination of rice wine composition by Vis-NIR spectroscopy and least-squares support vector machines,” Food Chem. **113**(1), 291–296 (2009). [CrossRef]

**9. **H. Chen and H. Tang, “Application of miniature spectrometer in liquid signature analysis technology,” Appl. Opt. **50**(26), 5093–5098 (2011). [CrossRef] [PubMed]

**10. **N. J. Chanover, D. A. Glenar, D. G. Voelz, X. Xiao, R. Tawalbeh, P. J. Boston, W. B. Brinckerhoff, P. R. Mahaffy, S. Getty, I. Ten Kate, and A. McAdam, “An AOTF-LDTOF spectrometer suite for in situ organic detection and characterization,” in *Proceedings of IEEE Aerospace Conference* (2011), pp. 1–13.

**11. **S. S. Chen, D. L. Donoho, and M. A. Saunders, “Atomic decomposition by basis pursuit,” SIAM Rev. **43**(1), 129–159 (2001). [CrossRef]

**12. **D. L. Donoho, M. Elad, and V. Temlyakov, “Stable recovery of sparse overcomplete representations in the presence of noise,” IEEE Trans. Inf. Theory **52**, 6–18 (2006).

**13. **D. L. Donoho, “For most large underdetermined systems of linear equations, the minimal l1-norm near-solution approximates the sparsest near-solution,” Commun. Pure Appl. Math. **59**, 907–934 (2006). [CrossRef]

**14. **D. L. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory **52**, 1289–1306 (2006).

**15. **R. Baraniuk, “Compressive sensing,” IEEE Signal Process. Mag. **24**(4), 118–121 (2007). [CrossRef]

**16. **D. Takhar, J. N. Laska, M. B. Wakin, M. F. Duarte, D. Baron, S. Sarvotham, K. F. Kelly, and R. G. Baraniuk, “A new compressive imaging camera architecture using optical-domain compression,” Proc. SPIE **6065**, 43–52 (2006).

**17. **M. F. Duarte, M. A. Davenport, D. Takhar, J. N. Laska, T. Sun, K. F. Kelly, and R. G. Baraniuk, “Single-pixel imaging via compressive sampling,” IEEE Signal Process. Mag. **25**(2), 83–91 (2008). [CrossRef]

**18. **D. L. Donoho and Y. Tsaig, “Fast solution of L1-norm minimization problems when the solution may be sparse,” IEEE Trans. Inf. Theory **54**, 4789–4812 (2008).

**19. **J. A. Tropp, “Just relax: Convex programming methods for identifying sparse signals,” IEEE Trans. Inf. Theory **52**, 1030–1051 (2006).

**20. **S. Boyd and L. Vandenberghe, *Convex Optimization* (Cambridge University Press, 2009).

**21. **S. J. Kim, K. Koh, M. Lustig, S. Boyd, and D. Gorinevsky, “An interior-point method for large-scale l1-regularized least squares,” IEEE J. Sel. Top. Signal Process. **1**, 606–617 (2007).

**22. **A. Forsgren, P. E. Gill, and M. H. Wright, “Interior methods for nonlinear optimization,” SIAM Rev. **44**(4), 525–597 (2002). [CrossRef]