We introduce the concept of a metamaterial aperture, in which an underlying reference mode interacts with a designed metamaterial surface to produce a series of complex field patterns. The resonant frequencies of the metamaterial elements are randomly distributed over a large bandwidth (18–26 GHz), such that the aperture produces a rapidly varying sequence of field patterns as a function of the input frequency. As the frequency of operation is scanned, different subsets of metamaterial elements become active, in turn varying the field patterns at the scene. Scene information can thus be indexed by frequency, with the overall effectiveness of the imaging scheme tied to the diversity of the generated field patterns. As the quality (Q-) factor of the metamaterial resonators increases, the number of distinct field patterns that can be generated increases—improving scene estimation. In this work we provide the foundation for computational imaging with metamaterial apertures based on frequency diversity, and establish that for resonators with physically relevant Q-factors, there are potentially enough distinct measurements of a typical scene within a reasonable bandwidth to achieve diffraction-limited reconstructions of physical scenes.
© 2013 Optical Society of America
Computational imaging schemes encompass a broad perspective on the process of collecting and processing scene information. The diffraction limit associated with a given aperture dimension effectively partitions a scene into a finite number of pixels (or voxels), implying that the scene may be represented digitally without loss of fidelity. However, the order and manner in which these voxels are accessed by the imaging apparatus are infinitely flexible, and certain approaches are preferential for certain classes of scenes. Imaging systems based on incoherent light typically either populate the image plane with an array of fixed detectors that acquire information in parallel, or mechanically scan a smaller number of detectors that acquire scene information serially. In both cases, a system of optics—often quite complex—is typically used to transmit the scene to the aperture in as pristine a condition as possible.
The use of coherent light introduces alternative imaging paradigms, such as holography, which can often obviate the need for lenses and other optical components. Moreover, phase coherent measurements can provide depth information about objects, providing a nearly tomographic representation of a scene. A conceptually straightforward approach to image formation using coherent light might be to generate a beam or series of beams holographically that illuminate a diffraction-limited portion of a scene. Making no assumptions about the scene, it becomes evident that to recover the diffraction-limited information present in the scene, the scene must be sampled at a spatial density equivalent to its space-bandwidth product (SBP) by scanning (or dynamically reconfiguring) the aperture to interrogate the scene with nonoverlapping pencil beams.
Collecting scene information by scanning a beam over a scene is intuitive. However, it should also be evident that any field patterns (henceforth measurement modes), suitably distinct in terms of spatial overlap, may also suffice to recover the scene; what is of consequence is simply that the measurement modes be distinct in a sense that will be discussed below. In the absence of any information about the scene, presumably any set of measurement modes should be equivalent to any other. However, once information about the scene and system is known, this equivalence breaks down. Depending on the quantity and distribution of information contained in a scene, the manner in which noise is present in the imaging system, and interest in particular classes of objects within the scene, certain mode sets may be superior to others. In fact, since all natural scenes are known to be compressible in some basis, natural scenes can be perfectly recovered with significantly fewer measurement modes than the SBP [2,3], provided that a set of optimal measurement modes is utilized. This statement forms the basis for compressive computational imaging. Given the potential for compression and the potential classification on the physical layer, a fully reconfigurable aperture introduces fundamental capabilities for imaging scenarios, particularly the capability to preferentially select a set of measurement modes best suited to the particular scene.
A phased-array system fulfills the description of a reconfigurable aperture and can, in principle, provide a limitless set of measurement modes. However, the drawbacks that have inhibited phased-array prevalence in applications are its significant cost, weight, and power requirements for implementing the required number of sources, phase shifters, and associated amplifier circuitry to generate measurement modes. The reality that it is the measurement modes that are of consequence, and not necessarily the methods for forming or detecting them, suggests that we may seek new computational imaging modalities for coherent light that can potentially provide similar functionality, but with reduced cost and complexity.
One such alternate path is that of a holographic optic. A hologram is a recorded series of fringes formed by interference of a scattered field with a reference beam. Upon illumination by a reference beam, the hologram produces the desired pattern of light, which can be considered a mode in the context of the above discussion. Because the hologram consists of a pattern of light and dark fringes, computer-generated holograms can be patterned directly without requiring scattering from actual objects, allowing the generation of masks that produce nearly any type of mode under illumination by a plane wave or other reference wave. A sequence of holographic masks, then, can be used to produce a sequence of measurement modes, potentially optimized for scene characteristics and computational imaging approaches. A related approach was used to form a single-pixel terahertz imaging system [4], in which a series of masks that were pixelated with opaque and transparent regions was used to produce the measurement modes.
Holograms have traditionally been recorded using photosensitive films exposed to the interfering fields. In more recent research trends, the use of artificially structured metamaterials to achieve diffractive optical elements has been pursued [5–7]. Artificial materials exhibit two key advantages: first, they offer access to designed electromagnetic properties that may be difficult or impossible to find in naturally occurring media. A second, potentially more revolutionary, advantage is that artificial materials present the potential for dynamic tuning [8], which could enable phased-array-level control over measurement modes in a package with the low cost and simplicity of holographic apertures.
One particular implementation of holographic imaging using metamaterials at microwave frequencies, which serves as the subject of the present analysis, is that of a guided-mode metamaterial imager (henceforth metaimager) that radiates via coupling a guided wave to a set of resonant, metamaterial elements distributed along the propagation path [9]. This configuration is strongly related to the well-known leaky-wave antenna [10–14], conventionally used to produce a directed beam whose angle varies with the input frequency. For conventional beam forming, this frequency diversity is often viewed as a drawback.
As presented, the metaimager is a single-pixel device that performs sequential measurements of a scene using a frequency-based encoding of the measurement modes. Because the number of measurement modes is equal to the number of distinct patterns that can be generated over a given frequency bandwidth, the aperture design strategy is to maximize frequency diversity. Thus, the metaimager is populated with metamaterial elements whose resonance frequencies are distributed randomly over a given bandwidth, each with as large a quality (Q-) factor as possible. The resulting aperture produces a sequence of illumination patterns that vary rapidly as a function of frequency and are well suited for compressive imaging of canonically sparse scenes. The advantage of imaging using frequency diversity is that a series of measurement modes can be obtained using a single frequency scan, avoiding mechanical scanning, multiple detectors, or even reconfigurable elements.
The use of spatially diverse nonoverlapping measurement modes is designed to preferentially enable computational compressive recovery of information from a canonically sparse scene [15–18]. In a recently reported preliminary experiment [9], a microstrip-based metaimager using frequency diversity was demonstrated to be capable of resolving objects in a 4000 pixel sparse scene using only 101 frequency-diverse measurements (within the K-band, 18–26 GHz). The extension of the holographic surface concept to a two-dimensional (2D) aperture has also been demonstrated, suggesting that full imaging and ranging through the use of frequency diversity or a combination of frequency-diversity and active-tuning techniques is possible. This frequency-scanned metamaterial imager provides an important proof of concept that more advanced and novel imaging modalities can be achieved in coherent imaging schemes through the use of complex, designed apertures.
In this work, we introduce the metaimager approach and provide an analysis of image recovery protocols given various imaging scenarios. The paper is structured as follows: in Section 2 we discuss a general forward model that enables us to illustrate the basic features of the coherent measurement process. Section 3 discusses metrics by which the measurement quality of a coherent imager can be judged. In Section 4 we model each of the metaimager’s metamaterial elements as a radiating dipole [20] fed by a guided wave in much the same fashion as a leaky-wave antenna. In this manner we abstract the key feature of the metamaterial imager, which is the point-by-point control over the amplitude and phase coupling between a guided propagating wave and the radiated field. Using this dipole model, we proceed to present simulated radiation patterns of the metaimager. Next, in Section 5, we use the general forward model and figures of merit discussed earlier and apply them to our specific metaimager antenna implementation, and finally, in Sections 6 and 7, we present simulations of 2D and three-dimensional (3D) scene reconstructions.
2. COHERENT IMAGING MODEL
Consider a system in which a transmitting aperture with optical axis along z, fed by a single source, coherently illuminates a scene with a particular field pattern, or mode. The projected field scatters from objects in the scene, and backscatter components are received by the same aperture. Each projected field pattern thus represents a measurement of the scene, and we are interested in using a discrete set of field measurements to estimate the scene. In the following derivation, we use the vector r to indicate points in the scene and r′ to indicate points on the aperture plane, defined as parallel and infinitesimally close to the aperture (see Fig. 1). We also use the vector r0 for the location of the source and the detector.
Suppose h(r′, r0) is the impulse response describing the field at the aperture plane due to the point source at r0. In this fashion h incorporates the physics associated with the aperture’s feed and radiation, and we can describe the field distribution across a plane located just above the aperture as E_ap(r′) = h(r′, r0) [21]. These fields, incident from the aperture onto an object-free region, are a solution to the Helmholtz equation and propagate into the scene through the free-space Green’s function G as the convolution

E(r) = ∫ G(r, r′) E_ap(r′) d²r′. (2)

If an object is present, however, we can describe it as a spatially varying perturbation of the background permittivity. Substituting the perturbed permittivity into the Helmholtz equation and ignoring second-order terms (first Born approximation) yields a scattered field driven by the source term f(r)E(r), where f(r) describes our scene, which we view as a scattering density due to an index perturbation in the free-space region. Assuming the incident field E(r) is not strongly perturbed by the objects’ presence, the scattered field arriving back at the aperture plane is the convolution

E_s(r′) = ∫ G(r′, r) f(r) E(r) d³r. (6)

The detector at r0 collects this scattered field through the same aperture response h. Normalizing the source to unit amplitude, using the reciprocity of the transfer functions, and rearranging the order of integration, the measurement reduces to

g = ∫ E²(r) f(r) d³r. (11)
If we assume the aperture is composed of a finite collection of discrete elements, we can discretize it as well and replace the convolution integrals of Eqs. (2) and (6) with summations. Following discretization of the scene and aperture, the measurement from Eq. (11) can now also be expressed as a sum over the scene voxels r_j:

g = Σ_j E²(r_j) f(r_j). (14)
Having arrived at Eq. (14), which describes our general coherent imaging forward model, we now turn to discussing figures of merit by which the aperture’s imaging abilities can be judged.
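As a concrete illustration, the discretized measurement g = Σ_j E²(r_j) f(r_j) can be sketched in a few lines. The complex field patterns E and the sparse scene f below are arbitrary stand-ins, not the paper's simulated aperture fields.

```python
import cmath
import math
import random

random.seed(0)

N_modes, N_vox = 8, 16  # measurement modes (frequencies) and scene voxels

# Hypothetical complex field patterns: E[m][j] is mode m evaluated at voxel j.
E = [[random.random() * cmath.exp(2j * math.pi * random.random())
      for _ in range(N_vox)] for _ in range(N_modes)]

f = [0.0] * N_vox          # scattering density of the scene
f[3], f[11] = 1.0, 0.5     # two reflective voxels (canonically sparse scene)

# One coherent measurement per mode: g_m = sum_j E_m(r_j)^2 f(r_j)
g = [sum(E[m][j] ** 2 * f[j] for j in range(N_vox)) for m in range(N_modes)]

print(len(g))  # one complex measurement per frequency-indexed mode
```

Each frequency step contributes one complex number g_m, so a full band sweep yields as many measurements as distinct field patterns.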
3. MEASUREMENT MATRIX AND FIGURES OF MERIT
Equation (14) can be expressed in matrix form as

g = Hf + n, (19)

where g is the vector of measurements, f is the rasterized scene of N voxels, n is additive noise, and each row of the measurement matrix H holds the squared field pattern of one measurement mode evaluated at the scene voxels. In the absence of noise one may, in principle, invert Eq. (19) to find f. Thus, to achieve the maximal diffraction-limited information associated with a given scene, a number of independent measurements corresponding to the SBP must be made. However, this does not mean that all sets of measurements are equal, or that any N measurements will completely capture the information in the scene. There is the complication of independence; for many aperture mode sets, it is likely that the measurement modes will have varying degrees of overlap. Therefore, it is possible that even a measurement matrix constructed from N modes may effectively undersample the scene. The situation can be improved by redesigning the modes, by acquiring more than N modes, or by the use of sparse recovery algorithms that allow image recovery from undersampled data [15–18].
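A minimal numeric sketch of the matrix-form inversion: a tiny, well-conditioned, real-valued H stands in for the complex measurement matrix, and the noiseless scene is recovered by Gaussian elimination. All values are illustrative.

```python
# Toy matrix-form forward model g = H f, inverted exactly in the
# noiseless, well-conditioned case. Real measurement matrices are
# complex, correlated, and typically ill-conditioned.
H = [[2.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
f_true = [1.0, 0.0, 0.5]

# Forward model: one measurement per row (mode) of H.
g = [sum(H[i][j] * f_true[j] for j in range(3)) for i in range(3)]

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

f_est = solve(H, g)
print([round(v, 6) for v in f_est])  # → [1.0, 0.0, 0.5]
```

When the modes overlap or noise is present, this direct inversion degrades, which motivates the figures of merit and sparse-recovery methods discussed next.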
With knowledge of the noise model, we wish to obtain a figure of merit for the suitability of a measurement matrix to estimate a scene in a compressive framework. Classic compressed sensing has used probability theory to show that random matrices obey the restricted isometry property (RIP) with high probability [22]. Obeying RIP guarantees reconstruction accuracy even in the presence of noise. More recently, further work has shown that classes of deterministic matrices are also effective compressed-sensing matrices if they obey the statistical restricted isometry property (StRIP) [24]. However, these deterministic matrices have strict requirements that can rarely be met in practice. For this reason we turn to a more empirical measure of the ability of a matrix to reconstruct sparse signals. Duarte-Carvajalino and Sapiro [25], inspired by the work of Elad [26], proposed a metric suitable for deterministic matrices based on the off-diagonal elements of the Gram matrix G = H̃ᵀH̃, formed from the measurement matrix with ℓ2-normalized columns [25]. The matrix reconstruction metric is the average mutual coherence μ_avg, the mean magnitude of the off-diagonal elements of G [25]. As a complementary figure of merit for the final reconstructions, we use the mean squared error (MSE), calculated using the rasterized scene f and its approximation f̂ according to MSE = ‖f − f̂‖²/N.
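The average mutual coherence can be computed directly from its definition: normalize the columns of H, form the Gram matrix, and average the magnitudes of its off-diagonal elements. The toy real-valued matrix below is an assumption for illustration.

```python
import math

# Average mutual coherence of a measurement matrix H: mean |G_ij|,
# i != j, where G is the Gram matrix of the l2-normalized columns.
H = [[1.0, 0.5, 0.0],
     [0.0, 1.0, 0.5],
     [0.5, 0.0, 1.0]]

cols = list(zip(*H))                                   # columns of H
norms = [math.sqrt(sum(v * v for v in c)) for c in cols]
Hn = [[c[i] / n for i in range(len(c))] for c, n in zip(cols, norms)]

off_diag = [abs(sum(a * b for a, b in zip(Hn[i], Hn[j])))
            for i in range(len(Hn)) for j in range(len(Hn)) if i != j]
mu_avg = sum(off_diag) / len(off_diag)
print(round(mu_avg, 4))  # → 0.4 for this toy matrix
```

A smaller μ_avg indicates less overlap between the columns of H, and hence better-conditioned compressive recovery.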
4. METAMATERIAL APERTURE
The metaimager (Fig. 2) consists of a parallel-plate waveguide in which the top plate is patterned with complementary metamaterial elements—patterned voids in a conducting sheet forming the Babinet equivalent of their volumetric counterparts [28–31]. Complementary elements were proposed as a means of introducing additional design options in surface and guided-wave devices, in which resonant elements can be used for filtering and the modification of other propagation properties [33–36].
A variety of excitations can be applied to the 2D waveguide, including monopole sources that launch guided waves. The solution for the parallel-plate geometry considered here is a guided cylindrical slow wave described by the zeroth-order Hankel function of the second kind, E_g(ρ) ∝ H0(2)(βρ), where β is the guided wavenumber in the dielectric-filled guide and ρ is the radial distance from the source.
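A few wavelengths from the feed, the guided cylindrical wave is commonly approximated by the large-argument asymptotic form of H0(2). The sketch below uses that approximation; the substrate permittivity is an assumed placeholder value.

```python
import cmath
import math

# Large-argument asymptotic form of the Hankel function H0^(2)(x):
#   H0^(2)(x) ~ sqrt(2 / (pi x)) * exp(-j (x - pi/4)),
# a common approximation for the outward cylindrical guided wave.
def hankel2_0_asym(x):
    return math.sqrt(2.0 / (math.pi * x)) * cmath.exp(-1j * (x - math.pi / 4))

f = 22e9                                           # mid K-band frequency [Hz]
eps_r = 2.2                                        # assumed substrate permittivity
beta = 2 * math.pi * f / 3e8 * math.sqrt(eps_r)    # guided (slow-wave) wavenumber

rho = [0.02, 0.04, 0.08]                           # distances from the feed [m]
E = [hankel2_0_asym(beta * r) for r in rho]
mags = [abs(e) for e in E]
print(mags[0] > mags[1] > mags[2])  # amplitude decays as 1/sqrt(rho)
```

The 1/sqrt(ρ) amplitude decay and the circular phase fronts of this wave are what drive the metamaterial elements along the guide.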
This configuration closely resembles a leaky-wave antenna, with each of the complementary metamaterial elements serving as a subwavelength resonant radiator. However, there are two significant differences between this metamaterial aperture and a conventional leaky-wave antenna. The first is the periodicity of the metamaterial elements, which is smaller (in relation to the wavelength) than is ever useful in a conventional leaky-wave antenna. The second difference is the use of resonant elements, as is common—though not required—in metamaterials. The use of resonance not only grants us access to the exotic electromagnetic responses metamaterials are known for, but also allows us to use frequency as a convenient parameter by which to index our measurement modes.
In the current section we do not explicitly follow the forward model discussed in Section 2, where we assumed the fields on the aperture plane are known exactly. The details of the waveguide feed and radiation mechanism of the metamaterial elements—including all near- and far-field interactions between the elements—are well beyond the scope of this paper. Instead, here we calculate the unperturbed field within the parallel-plate guide and use it as the local driving field exciting each of the complementary metamaterial elements.
The details of the actual complementary metamaterial elements are unimportant to the present discussion; as was stated in the introduction, we model each complementary metamaterial element as a radiating dipole [20,37]. For convenience we assume these elements are resonant over the K-band (18–26 GHz) of frequencies and that the metamaterial imager operates over the same frequency range.
The dipole moment of the ith element can be calculated from its polarizability αi and the local guided field according to

m_i(ω) = αi(ω) E_loc(r_i, ω). (24)

We take the polarizability to have the Lorentzian form

αi(ω) = Fω² / (ωi² − ω² + jωωi/Qi), (25)

where ωi is the element’s resonance frequency, Qi its quality factor, and F a coupling (oscillator) strength. The fields radiated by the dipoles follow from standard dipole radiation [38] and are approximated in the far-field region as

E(r) ≈ Σi (ηk²/4π) [(r̂i × m_i) × r̂i] e^(−jk|r − r_i|)/|r − r_i|, (26)

where k = ω/c, r̂i is the unit vector from the ith dipole toward the observation point, and η is the impedance of free space.
It is now evident why frequency can serve as a parameter by which to index the measurement modes. From Eq. (25) we note that by sweeping the frequency ω the polarizability of each dipole changes; in addition, the local field at each dipole can change with ω as well. From Eq. (24) we see that both the polarizability and the local field affect the dipole moments, and these in turn modify the field pattern with which the array illuminates the scene, calculated according to Eq. (26).
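The frequency dependence in Eqs. (24) and (25) can be sketched as follows; the oscillator strength F and the local-field phasor are assumed placeholder values.

```python
import cmath
import math

# Lorentzian polarizability of a resonant element, Eq. (25)-style:
#   alpha(w) = F * w^2 / (w0^2 - w^2 + j * w * w0 / Q)
def alpha(w, w0, Q, F=1.0):
    return F * w * w / (w0 * w0 - w * w + 1j * w * w0 / Q)

w0 = 2 * math.pi * 22e9   # element resonance at 22 GHz [rad/s]
Q = 200

# On resonance |alpha| peaks (at Q for F = 1); off resonance it is small.
on = abs(alpha(w0, w0, Q))
off = abs(alpha(2 * math.pi * 19e9, w0, Q))
print(on > off)

# Dipole moment from the local guided field, the Eq. (24)-style product.
E_loc = 0.3 * cmath.exp(-1j * 0.7)     # hypothetical local field phasor
m = alpha(w0, w0, Q) * E_loc
```

Sweeping ω through a population of such Lorentzians, each with a different ωi, activates different subsets of elements and thus different radiation patterns.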
To visualize how the aperture’s modes vary with frequency, we simulate a center-fed array of elements with resonance frequencies randomly chosen from the K-band spectrum. The aperture is 20 cm in size with relative dielectric constant εr. We calculate and plot in Fig. 3 the dipoles’ magnitude and phase distributions across the aperture at 20 GHz (A,B) and 26 GHz (D,E), assuming elements with Q-factors of 200. For simplicity, in the current discussion all dipoles share a single in-plane orientation; since the center probe generates fields with circular phase fronts, the dipole moments are weaker near the line through the feed along which the driving field component parallel to the dipoles vanishes (A,D). Investigation of the dipoles’ phase distribution (B,E) reveals the guided wave’s circular phase fronts perturbed by the presence of strongly resonating dipoles. From these distributions of dipole moments we calculate and display the fields illuminating a planar area spanning the field of view (C,F). The difference in radiation patterns highlights how unique sets of measurement modes can be accessed using only frequency diversity.
5. IMPROVING THE METAIMAGER
Within a frequency-diversity imaging scheme, it is advantageous to utilize array elements with the highest achievable Q-factor in order to decrease the correlation between the measurement modes. To illustrate this we simulate a center-fed array 48 cm in size and sweep the elements’ Q-factor from 50 to 1000. For each value of the Q-factor we compute the array’s measurement matrix and calculate the average mutual coherence μ_avg. To compute each measurement matrix, we sweep through frequency steps across the K-band spectrum, calculating at each frequency step the corresponding field pattern illuminating the same planar area described in Section 4. Since our aperture’s angular-resolution limit is approximated as λ/D (where D is the aperture size), we calculate the fields across resolution-limited pixels. We then calculate μ_avg of the canonical measurement matrix as well as of a wavelets-transformed measurement matrix, because natural scenes are often more compressible in the wavelets basis [39,40] (see Section 6 for further details). A plot of μ_avg versus the Q-factor is shown in Fig. 4A, where we observe how an increase in the Q-factor causes μ_avg to decrease—signifying less correlation between measurements.
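A toy calculation illustrates the trend: with a Lorentzian element response, the per-element excitation weights at two nearby probe frequencies decorrelate as Q grows, because the linewidth f0/Q shrinks and each probe frequency activates a more exclusive subset of elements. The resonance grid and probe frequencies below are illustrative assumptions, not the simulated 48 cm aperture.

```python
import math

# Element resonances spread densely over the K-band (GHz).
res = [18.0 + 0.08 * k for k in range(101)]

# |alpha_i(f)| up to a constant: element excitation weights at probe f.
def weights(f, Q):
    return [f * f / math.hypot(fi * fi - f * f, f * fi / Q) for fi in res]

def corr(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

corr_loQ = corr(weights(20.0, 50), weights(20.5, 50))
corr_hiQ = corr(weights(20.0, 1000), weights(20.5, 1000))
print(corr_hiQ < corr_loQ)   # higher Q -> less correlated mode weights
```

At Q = 50 the two weight vectors are broad, overlapping bumps; at Q = 1000 they are narrow spikes on different elements, which is the decorrelation mechanism the μ_avg sweep quantifies.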
Since arbitrarily large Q-factors are not realistically achievable, we explore another method to improve the average mutual coherence μ_avg: alternating the location of the source. We still excite the array using one source at each measurement, and keep the total number of measurements unchanged, but we switch between various source locations. Figure 4B compares μ_avg when the same array is excited by an increasing number of sources (evenly distributed around the aperture’s center as shown in the figure’s inset) and the Q-factor is set to a realistically attainable value of 200. It is apparent that we can still increase the orthogonality of our measurements, even when limited by the elements’ Q-factors, by alternating the source location: since each location generates a different guided wave across the aperture, moving the source changes the distribution of dipole moments across the aperture and modifies the field pattern with which the aperture illuminates the scene.
6. IMAGING OF A 2D SCENE
To investigate the imaging capabilities of the metamaterial aperture, we compare simulated reconstructions using all aperture configurations discussed in Section 5. The simulated scene is a grayscale image of a person holding reflective construction tools (Fig. 5A), downsampled to a reduced pixel count (Fig. 5B). In the present discussion we are interested in microwave imaging. At these frequencies we expect the metallic tools to reflect far more strongly than the person, and we edit the scene’s colors accordingly (here white pixels correspond to highly reflective surfaces). While in practice speckle will corrupt the reconstruction, since different parts of the scene reflect from different depths, here we simplify the scenario and instead assume each pixel reflects from a single point at its center, and that all pixels lie on a single plane.
We simulate measurements according to Eq. (19), after first normalizing the ℓ2 norm of the rasterized scene. As discussed in Section 3, when the measurements are noiseless it is possible to estimate the scene using a simple matrix inversion of Eq. (19); such a reconstruction is shown in Fig. 5C. Reflective features from the original scene are recognizable in this approximation, but errors are present due to the fact that the number of measurements is smaller than the SBP and because the measurements are not orthogonal. The approximation worsens when noise is introduced, as shown in Fig. 5D for an SNR of 10 dB. Here we turn to sparsity as a constraint and pose the reconstruction problem as the ℓ1-regularized minimization

f̂ = argmin_f ‖g − Hf‖² + λ‖f‖₁ (29)

[23,41,42]. Figure 5E shows reconstruction results from the noisy measurements when Eq. (29) was solved using the TwIST algorithm [43]. We can clearly see significant improvements in the scene approximation. In addition, since natural scenes such as ours are compressible in the wavelets basis [39,40], we calculate the measurement matrix for scenes in the wavelets basis according to H_w = HW⁻¹ (where W is the Haar wavelets-transform matrix [44]), approximate the wavelet coefficients using Eq. (29), and show the resulting reconstruction in Fig. 5F, where further improvements are observed. As was explained in Section 5, we can also improve reconstructions by switching the location of the aperture’s source. Figures 5G and 5H depict the compressive-sensing noisy reconstructions in the canonical and wavelets bases, respectively, for an aperture with a Q-factor of 200 fed by six alternating sources. In summary, we calculate and plot the MSEs of all reconstructions as a function of Q-factor and the number of alternating sources (for a constant Q-factor of 200) in Fig. 6. We observe that the wavelets-basis reconstructions outperform both the canonical compressive-sensing reconstructions and the matrix-inversion reconstructions. Also, in agreement with the average mutual-coherence trends discussed in Section 5, reconstruction performance improves with Q-factor and the number of sources.
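A minimal sketch of the ℓ1-regularized reconstruction, using plain iterative soft thresholding (ISTA) as a simpler stand-in for the TwIST algorithm [43]; the tiny real-valued system and parameters are illustrative.

```python
import math

# ISTA for min ||g - H f||^2 + lam * ||f||_1: alternate a gradient step
# on the data term with elementwise soft thresholding.
H = [[1.0, 0.2, 0.1],
     [0.2, 1.0, 0.2],
     [0.1, 0.2, 1.0]]
f_true = [0.0, 1.0, 0.0]              # sparse scene
g = [sum(H[i][j] * f_true[j] for j in range(3)) for i in range(3)]

def matvec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def soft(x, t):
    return [math.copysign(max(abs(v) - t, 0.0), v) for v in x]

Ht = [list(c) for c in zip(*H)]       # H transpose
f = [0.0, 0.0, 0.0]
step, lam = 0.5, 0.01                 # step size and sparsity weight
for _ in range(200):
    r = [gi - hi for gi, hi in zip(g, matvec(H, f))]   # residual g - Hf
    grad = matvec(Ht, r)
    f = soft([fi + step * gi for fi, gi in zip(f, grad)], step * lam)

print(f)  # close to f_true, with slight l1 shrinkage on the nonzero entry
```

The thresholding drives components that the data do not support to exactly zero, which is why sparse scenes survive noisy, undersampled measurements far better than plain inversion.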
7. IMAGING OF A 3D SCENE
In a similar fashion to the 2D scene reconstruction described above, we can demonstrate the aperture’s ability to image 3D scenes. For the purposes of this demonstration, our 3D target is represented using the standard tessellation language (STL) format, which describes the surface geometry of an object as a collection of triangular facets. We consider only facets facing the aperture to be reflective, and define the reflectivity of a facet at r, when it is illuminated from the origin, to be proportional to its area multiplied by the cosine of the angle between its surface normal and the direction from the facet back to the origin.
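A sketch of the facet-reflectivity rule assumed here (area times the cosine of the angle between the facet normal and the direction back to the aperture); the facet coordinates are arbitrary, and the vertex winding is chosen so the normal faces the origin.

```python
import math

# Reflectivity of one triangular STL facet: area * cos(angle between
# the facet normal and the direction from the facet back to the origin),
# clamped to zero for back-facing facets.
def facet_reflectivity(v0, v1, v2, origin=(0.0, 0.0, 0.0)):
    e1 = [b - a for a, b in zip(v0, v1)]
    e2 = [b - a for a, b in zip(v0, v2)]
    n = [e1[1] * e2[2] - e1[2] * e2[1],      # cross product: normal, |n| = 2A
         e1[2] * e2[0] - e1[0] * e2[2],
         e1[0] * e2[1] - e1[1] * e2[0]]
    area2 = math.sqrt(sum(c * c for c in n))
    centroid = [(a + b + c) / 3 for a, b, c in zip(v0, v1, v2)]
    to_origin = [o - c for o, c in zip(origin, centroid)]
    d = math.sqrt(sum(c * c for c in to_origin))
    cos = sum(a * b for a, b in zip(n, to_origin)) / (area2 * d)
    return 0.5 * area2 * max(cos, 0.0)       # back-facing facets reflect 0

# A unit right-triangle facet at z = 3, wound so its normal points
# back toward the origin (the aperture).
r = facet_reflectivity((0, 0, 3), (0, 1, 3), (1, 0, 3))
```

Summing this quantity over the facets inside each cell gives the cell's scattering density used in the discretized 3D scene.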
Next, we discretize the volume of interest into cells in angle and range. The range resolution of an aperture is Δr = c/(2·BW), where c is the speed of light and BW is the operational bandwidth [45]; the angular resolution of a given aperture was discussed in Section 5. We define the scattering from each cell to be the sum of all reflectivities associated with the facets it contains, and, as was done for the 2D scene, assume the cell reflects from a single point at its center.
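The range-resolution formula is a one-liner for the K-band sweep considered here:

```python
# Range resolution Delta_r = c / (2 * BW) for an 18-26 GHz sweep.
c = 3e8                  # speed of light [m/s]
BW = 8e9                 # K-band bandwidth, 26 GHz - 18 GHz [Hz]
delta_r = c / (2 * BW)   # 0.01875 m, i.e. range cells of about 1.9 cm
print(delta_r)
```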
As before, each row in the measurement matrix corresponds to the rasterized fields radiating from the aperture to the center of each cell. However, calculating the fields to every cell in the 3D volume can be computationally taxing. Instead, we calculate the fields across the desired FOV at a constant distance from the aperture’s center, and assume that, along the same angles, fields at a second distance can be obtained by rescaling the amplitude by the ratio of the distances and advancing the propagation phase accordingly. The scene estimate is again obtained by solving Eq. (29). We recognize that the scene discretization has alleviated potentially adverse effects due to speckle, which remains an active topic of research.
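The free-space rescaling between two ranges along the same angle can be sketched as an amplitude factor R1/R2 and a propagation phase; the frequency and ranges below are illustrative.

```python
import cmath
import math

# Map a field sample computed at range R1 to range R2 along the same
# angle: spherical-spreading amplitude factor R1/R2 and propagation
# phase exp(-j k (R2 - R1)).
def rescale(E_R1, R1, R2, f=22e9):
    k = 2 * math.pi * f / 3e8       # free-space wavenumber [rad/m]
    return E_R1 * (R1 / R2) * cmath.exp(-1j * k * (R2 - R1))

E1 = 1.0 + 0.0j
E2 = rescale(E1, 3.0, 6.0)
print(abs(E2))   # amplitude halves when the range doubles
```

This lets one field computation per angle populate every range bin of the measurement matrix instead of recomputing the full dipole sum per cell.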
We illustrate this setup in Fig. 7A, where an STL representation of a human is illuminated by an aperture 3.25 m away. The triangular facets used to describe the shape’s surface are shown in Fig. 7B. Figure 7C depicts the discretized target when the volume surrounding the target was discretized into cells—fewer than the full SBP, but more manageable computationally. Here higher scattering densities are marked with darker shades, and nonreflecting cells are transparent. We compute the measurement matrix across all cells, assuming a square aperture whose elements have a Q-factor of 200, and we use six source locations, sweeping each through 1000 frequency steps. Although the total number of measurements is just more than one-third the number of discretized cells to be reconstructed, the compressive-sensing algorithm utilizing our sparsity prior successfully reconstructs the scene. The volumetric scattering-density reconstructions for noiseless measurements and in the presence of noise with 5 dB SNR are shown in Figs. 7D and 7E, respectively, where for visualization purposes we have thresholded the displayed pixels. In both reconstructions, the person is clearly identifiable.
We have introduced a computational imaging framework appropriate to a variety of single-pixel coherent imagers, and applied it to a specific aperture implementation we termed the metaimager—a 2D guided-wave aperture radiating via an array of complementary metamaterial elements. We have modeled each element as a radiating dipole and shown how the elements’ dispersion allows the metaimager to control its field patterns through frequency diversity. Furthermore, by randomly distributing the resonance frequencies of its elements, we demonstrated that the metaimager can illuminate a scene with random field patterns well suited for compressive sensing. We have discussed how to improve the metaimager’s imaging capabilities by increasing its elements’ Q-factors as well as by switching between various source locations. Lastly, we have presented simulations of 2D and 3D scene reconstructions demonstrating the imaging capabilities of the proposed metaimager. Extending the work presented in this paper to model the dipole interactions, instead of using our simplified assumption of noninteracting dipoles, will likely improve the accuracy of the predicted aperture radiation pattern. In addition, in our reconstructions each pixel/voxel was assumed to reflect like a single point from its center; speckle-related issues were not addressed here and can be tackled by future extensions of this work as well.
This work was supported by the Air Force Office of Scientific Research (AFOSR) (Grant No. FA9550-09-1-0562) and the Department of Homeland Security (DHS) (Grant No. HSHQDC-XX-12-C-00049). The authors also thank Professor Guillermo Sapiro for providing comments on the manuscript regarding the coherence metric.
1. D. J. Brady, Optical Imaging and Spectroscopy (Wiley-OSA, 2009).
2. D. J. Brady, K. Choi, D. L. Marks, R. Horisaki, and S. Lim, “Compressive holography,” Opt. Express 17, 13040–13049 (2009). [CrossRef]
3. C. F. Cull, D. A. Wikner, J. N. Mait, M. Mattheiss, and D. J. Brady, “Millimeter-wave compressive holography,” Appl. Opt. 49, E67–E82 (2010). [CrossRef]
4. W. L. Chan, K. Charan, D. Takhar, K. F. Kelly, R. G. Baraniuk, and D. M. Mittleman, “A single-pixel terahertz imaging system based on compressed sensing,” Appl. Phys. Lett. 93, 121105 (2008). [CrossRef]
5. W. Freese, T. Kampfe, E. B. Kley, and A. Tunnermann, “Design of binary subwavelength multiphase level computer generated holograms,” Opt. Lett. 35, 676–678 (2010). [CrossRef]
6. U. Levy, H. C. Kim, C. H. Tsai, and Y. Fainman, “Near-infrared demonstration of computer-generated holograms implemented by using subwavelength gratings with space-variant orientation,” Opt. Lett. 30, 2089–2091 (2005). [CrossRef]
7. S. Larouche, Y. J. Tsai, T. Tyler, N. M. Jokerst, and D. R. Smith, “Infrared metamaterial phase holograms,” Nat. Mater. 11, 450–454 (2012). [CrossRef]
8. H. T. Chen, W. J. Padilla, J. M. O. Zide, A. C. Gossard, A. J. Taylor, and R. D. Averitt, “Active terahertz metamaterial devices,” Nature 444, 597–600 (2006). [CrossRef]
9. J. Hunt, T. Driscoll, A. Mrozack, G. Lipworth, M. Reynolds, D. Brady, and D. R. Smith, “Metamaterial apertures for computational imaging,” Science 339, 310–313 (2013). [CrossRef]
10. W. Menzel, “New traveling-wave antenna in microstrip,” AEU Int. J. Electron. Commun. 33, 137–140 (1979).
11. A. Oliner and K. S. Lee, “Microstrip leaky wave strip antennas,” in IEEE International Antennas and Propagation Symposium Digest, Philadelphia, Pennsylvania, 1986, p. 443.
12. D. R. Jackson, C. Caloz, and T. Itoh, “Leaky-wave antennas,” Proc. IEEE 100, 2194–2206 (2012). [CrossRef]
13. A. Sutinjo, M. Okoniewski, and R. H. Johnston, “Radiation from fast and slow traveling waves,” IEEE Antennas Propag. Mag. 50(4), 175–181 (2008). [CrossRef]
14. C. A. Balanis, in Modern Antenna Handbook (Wiley, 2008), Chap. 7.
15. E. J. Candès, “Compressive sampling,” in Proceedings of the International Congress of Mathematicians, Madrid, August 22–30, 2006 (invited lectures, 2006).
16. D. L. Donoho, “Compressed sensing,” IEEE Trans. Inform. Theory 52, 1289–1306 (2006). [CrossRef]
17. R. G. Baraniuk, “Compressive sensing [lecture notes],” IEEE Signal Process. Mag. 24(4), 118–121 (2007). [CrossRef]
18. J. Romberg, “Imaging via compressive sampling,” IEEE Signal Process. Mag. 25(2), 14–20 (2008). [CrossRef]
19. B. H. Fong, J. S. Colburn, J. J. Ottusch, J. L. Visher, and D. F. Sievenpiper, “Scalar and tensor holographic artificial impedance surfaces,” IEEE Trans. Antennas Propag. 58, 3212–3221 (2010). [CrossRef]
20. C. Rockstuhl, C. Menzel, S. Muhlig, J. Petschulat, C. Helgert, C. Etrich, A. Chipouline, T. Pertsch, and F. Lederer, “Scattering properties of meta-atoms,” Phys. Rev. B 83, 245119 (2011). [CrossRef]
21. L. Mandel and E. Wolf, Optical Coherence and Quantum Optics (Cambridge University, 1995).
22. E. J. Candès and T. Tao, “Decoding by linear programming,” IEEE Trans. Inf. Theory 51, 4203–4215 (2005). [CrossRef]
23. E. J. Candes and M. B. Wakin, “An introduction to compressive sampling,” IEEE Signal Process. Mag. 25(2), 21–30 (2008). [CrossRef]
24. R. Calderbank, S. Howard, and S. Jafarpour, “Construction of a large class of deterministic sensing matrices that satisfy a statistical isometry property,” IEEE J. Sel. Top. Signal Process. 4, 358–374 (2010). [CrossRef]
25. J. M. Duarte-Carvajalino and G. Sapiro, “Learning to sense sparse signals: simultaneous sensing matrix and sparsifying dictionary optimization,” IEEE Trans. Image Process. 18, 1395–1408 (2009). [CrossRef]
26. M. Elad, “Optimized projections for compressed sensing,” IEEE Trans. Signal Process. 55, 5695–5702 (2007). [CrossRef]
27. N. Landy, J. Hunt, and D. R. Smith, “Homogenization of guided wave metamaterials,” Photon. Nanostruct.—Fundam. Applic. (to be published).
28. F. Falcone, T. Lopetegi, M. A. G. Laso, J. D. Baena, J. Bonache, M. Beruete, R. Marques, F. Martin, and M. Sorolla, “Babinet principle applied to the design of metasurfaces and metamaterials,” Phys. Rev. Lett. 93, 197401 (2004). [CrossRef]
29. F. Martin, J. Bonache, F. Falcone, M. Sorolla, and R. Marques, “Split ring resonator-based left-handed coplanar waveguide,” Appl. Phys. Lett. 83, 4652–4654 (2003). [CrossRef]
30. J. Martel, R. Marques, F. Falcone, J. D. Baena, F. Medina, F. Martin, and M. Sorolla, “A new LC series element for compact bandpass filter design,” IEEE Microw. Wirel. Compon. Lett. 14, 210–212 (2004). [CrossRef]
31. E. Jarauta, M. A. G. Laso, T. Lopetegi, F. Falcone, M. Beruete, J. D. Baena, A. Marcotegui, J. Bonache, J. Garcia, R. Marques, and F. Martin, “Novel microstrip backward coupler with metamaterial cells for fully planar fabrication techniques,” Microw. Opt. Technol. Lett. 48, 1205–1209 (2006). [CrossRef]
32. K. Afrooz, A. Abdipour, and F. Martin, “Broadband bandpass filter using open complementary split ring resonator based on metamaterial unit-cell concept,” Microw. Opt. Technol. Lett. 54, 2832–2835 (2012). [CrossRef]
33. R. Liu, Q. Cheng, T. Hand, J. J. Mock, T. J. Cui, S. A. Cummer, and D. R. Smith, “Experimental demonstration of electromagnetic tunneling through an epsilon-near-zero metamaterial at microwave frequencies,” Phys. Rev. Lett. 100, 023903 (2008). [CrossRef]
34. Q. Cheng, R. P. Liu, J. J. Mock, T. J. Cui, and D. R. Smith, “Partial focusing by indefinite complementary metamaterials,” Phys. Rev. B 78, 121102 (2008). [CrossRef]
35. R. P. Liu, X. M. Yang, J. G. Gollub, J. J. Mock, T. J. Cui, and D. R. Smith, “Gradient index circuit by waveguided metamaterials,” Appl. Phys. Lett. 94, 073506 (2009). [CrossRef]
36. Q. Cheng, H. F. Ma, and T. J. Cui, “Broadband planar Luneburg lens based on complementary metamaterials,” Appl. Phys. Lett. 95, 181901 (2009). [CrossRef]
37. T. H. Hand, J. Gollub, S. Sajuyigbe, D. R. Smith, and S. A. Cummer, “Characterization of complementary electric field coupled resonant surfaces,” Appl. Phys. Lett. 93, 212504 (2008). [CrossRef]
38. C. A. Balanis, Advanced Engineering Electromagnetics (Wiley, 1989).
39. B. E. Usevitch, “A tutorial on modern lossy wavelet image compression: foundations of JPEG 2000,” IEEE Signal Process. Mag. 18(5), 22–35 (2001). [CrossRef]
40. R. G. Baraniuk, V. Cevher, M. F. Duarte, and C. Hegde, “Model-based compressive sensing,” IEEE Trans. Inf. Theory 56, 1982–2001 (2010). [CrossRef]
41. E. J. Candes, J. K. Romberg, and T. Tao, “Stable signal recovery from incomplete and inaccurate measurements,” Commun. Pure Appl. Math. 59, 1207–1223 (2006). [CrossRef]
42. D. L. Donoho, “For most large underdetermined systems of equations, the minimal l(1)-norm near-solution approximates the sparsest near-solution,” Commun. Pure Appl. Math. 59, 907–934 (2006). [CrossRef]
43. J. M. Bioucas-Dias and M. A. Figueiredo, “A new TwIST: two-step iterative shrinkage/thresholding algorithms for image restoration,” IEEE Trans. Image Process. 16, 2992–3004 (2007). [CrossRef]
44. R. C. Gonzalez and R. E. Woods, Digital Image Processing, 2nd ed. (Prentice-Hall, 2002).
45. M. A. Richards, Fundamentals of Radar Signal Processing (McGraw-Hill, 2005).