Conventional color imaging requires absorptive color-filter arrays, which exhibit low light transmission. Here, we replace the absorptive color-filter array with a transparent diffractive-filter array (DFA) and apply computational optics techniques to enable color imaging with a sensitivity that is enhanced by a factor as high as 3.12. The DFA diffracts incident light onto a conventional monochrome sensor array to create intensity distributions that are wavelength dependent. By first calibrating these wavelength-dependent intensity distributions and then applying computational techniques, we demonstrate single-shot hyperspectral imaging and absorption-free color imaging.
© 2015 Optical Society of America
Color imaging provides information in the spectral domain, with important applications in daily life, scientific research, industrial processes, and beyond. Color can represent critical information such as body temperature, material composition, and aesthetics. Conventional cameras employ an absorptive color-filter array (also referred to as the Bayer filter) to determine the color of each spatial pixel [1]; it is usually composed of an array of square color subpixels placed over a sensor array such that one color subpixel is aligned to one sensor pixel. Each color subpixel transmits one primary color (red, green, or blue) while absorbing the rest; its overall light transmission is therefore low, which compromises light sensitivity. Furthermore, such color-filter arrays require multiple aligned lithography steps to manufacture, which is cumbersome. Here, we demonstrate a transparent diffractive-filter array that can be easily and inexpensively fabricated to achieve color imaging with very little absorption loss.
Recently, new filter designs have been proposed to overcome certain limitations of conventional Bayer filters [2,3]. Most of these designs aim to enhance the color accuracy by tuning the transmitted spectral bands via nano- or microstructures. Plasmonics-based color filters suffer from decreased light transmission due to parasitic absorption in the required metal layers [4–10]. In addition, these require very precise nanofabrication of subwavelength structures, which is challenging and difficult to extend to mass production. Alternative filters that exploit a variety of optical resonance effects have also been proposed, but these exhibit very limited bandwidths [11–13], and some also require multiple lithography steps. A recent approach introduced complex nanophotonic deflectors above the sensor array and demonstrated a twofold improvement in light sensitivity [15]. However, the required nanostructures have large aspect ratios and are thus difficult to fabricate and expensive to manufacture.
The concept of a coded aperture was previously explored to construct a spectral imager [16]. Although it shows reasonable spectral resolution and image quality, this technique requires a coded (absorptive) aperture and a dispersive element (prism) to generate multispectral images, along with extra relay lenses. An absorptive aperture clearly limits photon throughput and hence reduces sensitivity. Recently, commercial hyperspectral sensors have also been introduced [17]. In these, spectral selectivity is achieved via complex Fabry–Perot resonators integrated on top of the CMOS sensor. Not only does this technology require precise alignment between the filter array and the sensor array, but the filters themselves require expensive multilayer deposition techniques. Most importantly, the overall light transmission is greatly reduced by the spectral selectivity of each filter in the array, and sensitivity is consequently degraded.
In this paper, we overcome the limitations of all previous approaches by utilizing a fully transparent diffractive-filter array (DFA) that not only enhances light sensitivity by a factor as high as 3.12, but is also significantly simpler to mass manufacture. Specifically, we replace the conventional Bayer filter with a multilevel DFA atop a conventional sensor array, as shown in Fig. 1(a). Light incident on the DFA diffracts and creates an intensity pattern on the underlying sensor array. We design the DFA such that the diffracted intensity pattern of each wavelength is unique. We then calibrate the response of the DFA to each wavelength, which we refer to as the spatial-spectral point-spread function. Finally, we apply computational techniques to recover the color information of any unknown incident illumination. The key advantages of our approach are: (1) the DFA can be completely transparent, which allows all the light to be utilized for imaging and thus improves sensitivity; (2) the DFA can be easily fabricated using single-step grayscale lithography and mass manufactured using imprinting techniques [18,19]; (3) it has large tolerance to fabrication inaccuracy, since calibration is performed after the filter is patterned and fixed; (4) minimal alignment is necessary between the DFA and the underlying sensor array; (5) only one optical element (the DFA) is introduced to replace the Bayer filter; (6) the technique can be applied to any conventional sensor array (CMOS or CCD); and (7) the technique is easily extended to multi- and hyperspectral imaging.
2. PRINCIPLE OF OPERATION
The basic schematic of our approach is shown in Fig. 1(a). The DFA is composed of a periodic unit cell, which in our implementation is an array of squares. For this initial demonstration, the depth of each square is randomly assigned. The DFA is placed at a small distance (the gap) from the sensor array. In our current implementation, one spatial pixel of the image is composed of an array of sensor pixels, which in turn corresponds to one DFA unit cell. We utilized a commercial monochrome sensor with a 6 μm pixel size (Model #: DMM22BUC03-ML, The Imaging Source). A photograph of the final assembly is shown in Fig. 1(b), and additional details of the DFA-sensor assembly are included in Supplement 1. Grayscale lithography was used to pattern these multilevel diffractive optics in a single step [20,21]. The DFA was made in positive photoresist (Shipley 1813) spin-coated onto a fused silica substrate; the exposure dose was modulated in grayscale to generate different depths after development. In this case, the depth of each square in the DFA was controlled between 0 and 1.2 μm. Specific details of the fabrication process are included in Supplement 1. An optical micrograph and an atomic force micrograph of different portions of the fabricated DFA are shown in Figs. 1(c) and 1(d); the periodicity and the multilevel height distribution within one unit cell are clearly visible.
We first characterized the device by measuring the diffracted light intensity distribution on the sensor array as a function of wavelength. We call this data the spatial-spectral point-spread function (SS-PSF) of the DFA. This is analogous to our previous work in computational spectroscopy [21]. We built a scanning spectrometer by placing the single-mode fiber input of a conventional spectrometer (Ocean Optics Jaz) on an automated two-axis stage (Thorlabs). The DFA-sensor assembly was illuminated by collimated white light from a supercontinuum source (SuperK Compact, NKT Photonics). The scan axes of the stage were carefully aligned to the axes of the DFA-sensor assembly. The distance between the DFA and the sensor was also carefully set using a manual micrometer stage with a precision of 10 μm, which an analysis of the depth of focus of the DFA showed to be sufficient. Further details of the alignment are included in Supplement 1. We then captured a spectrum at each position of the stage, which resulted in a 3D SS-PSF matrix (one wavelength dimension and two spatial dimensions). This data was captured for three different values of the DFA-sensor gap. Exemplary images at five wavelengths and the three gap values are shown in Fig. 2(a).
Note that, as the wavelength changes, the diffracted image also changes. The spectral resolution of our technique relies on the decorrelation of the diffracted images at closely spaced wavelengths. We can quantify this effect via a spectral correlation function, calculated as a function of the wavelength spacing Δλ [21–23]:

C(Δλ, x) = ⟨I(λ, x) I(λ + Δλ, x)⟩_λ / [⟨I(λ, x)⟩_λ ⟨I(λ + Δλ, x)⟩_λ] − 1,

where I(λ, x) is the diffracted intensity at wavelength λ and sensor position x, and ⟨·⟩_λ denotes an average over wavelength. The correlation plotted in Fig. 2(b) is further averaged over the entire space x. This correlation function also depends on the gap. The spectral resolution is then defined as the wavelength spacing at which the normalized image correlation, C(Δλ), drops to 0.5. Thereby, we can plot the spectral resolution as a function of the gap, as shown in the inset of Fig. 2(b). As expected, the spectral resolution improves with increasing gap. However, as discussed later, the crosstalk between spatial pixels also increases with the gap; therefore, an optimal choice of gap is necessary. The spectral resolution of the 2D DFA studied here is lower than that of the 1D computational spectrometer designs [21,24], due to the smaller gap and the fewer pixels in each unit cell of the DFA. However, it is sufficient for accurate color reconstruction, as shown later.
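The correlation analysis above can be sketched numerically. The following is a minimal NumPy sketch (our own illustration, not the paper's code); the uniform wavelength grid and the normalization so that C(0) = 1 are our assumptions:

```python
import numpy as np

def spectral_correlation(I, dl_max):
    """Spectral correlation of a stack of diffracted images.

    I: (n_wavelengths, ny, nx) intensities on a uniform wavelength grid.
    Returns C for wavelength-index offsets 0..dl_max, averaged over all
    sensor positions and normalized so that C[0] = 1.
    """
    n = I.shape[0]
    C = np.empty(dl_max + 1)
    for k in range(dl_max + 1):
        a, b = I[: n - k], I[k:]
        # <I(λ)I(λ+Δλ)>_λ / (<I(λ)>_λ <I(λ+Δλ)>_λ) - 1, per sensor pixel
        c = np.mean(a * b, axis=0) / (np.mean(a, axis=0) * np.mean(b, axis=0)) - 1.0
        C[k] = c.mean()  # average over the sensor plane
    return C / C[0]

def spectral_resolution(C, dl_step):
    """Wavelength spacing at which C first drops below 0.5
    (dl_step is the wavelength grid spacing)."""
    below = np.nonzero(C < 0.5)[0]
    return below[0] * dl_step if below.size else None
```

For strongly decorrelating (speckle-like) patterns, C falls off quickly with Δλ, and the half-width directly gives the spectral resolution plotted in the inset of Fig. 2(b).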
The diffracted intensity distribution of one image pixel (the block of sensor pixels underneath one DFA unit cell) can be modeled as I = P S, where P is the calibrated SS-PSF matrix and S is the compound photon flux spectrum, an element-wise multiplication of the unknown photon flux spectrum and the sensor's quantum efficiency [21,24]. Solving for S from the measured I is an inverse problem, which can be solved by minimizing the residual norm ‖P S − I‖. Here, we present two methods to solve this inverse problem. The first is a modified version of the iterative direct-binary-search (DBS) algorithm [21]. DBS has been successfully implemented for optimizing nanophotonic devices [25–27], phase masks for 3D lithography [28], spectral splitters and concentrators [29,30], and computational spectrometers [21]. The second approach is based on singular-value decomposition (SVD) of the system matrix P and regularization of the inverse problem [24]. This is a faster algorithm and is also less sensitive to noise. In this case, the optimal solution is represented as a weighted linear combination of the singular vectors of P in the spectral domain. Details of both reconstruction techniques are described in Supplement 1.
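The SVD-based approach can be illustrated with a generic Tikhonov-regularized least-squares sketch. The symbols P, S, I and the regularization weight alpha are our notation for illustration; the paper's exact regularization scheme is described in its Supplement 1:

```python
import numpy as np

def reconstruct_spectrum(P, I, alpha=1e-2):
    """Regularized least squares: argmin_S ||P S - I||^2 + alpha^2 ||S||^2.

    Solved via the SVD of P = U diag(s) Vt: each singular component is
    weighted by s_i / (s_i^2 + alpha^2), which damps small, noise-prone
    singular values instead of inverting them directly.

    P: (m, n) calibrated SS-PSF matrix (m sensor pixels, n wavelengths)
    I: (m,) measured diffracted intensities for one image pixel
    Returns the (n,) reconstructed compound spectrum S.
    """
    U, s, Vt = np.linalg.svd(P, full_matrices=False)
    filt = s / (s**2 + alpha**2)  # regularized inverse of each singular value
    return Vt.T @ (filt * (U.T @ I))
```

As alpha goes to zero this reduces to the ordinary pseudo-inverse solution; larger alpha trades fidelity for noise robustness.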
To demonstrate preliminary color reconstruction, we placed various color filters (Nikon) one at a time in the path of the collimated white light illuminating the DFA-sensor assembly. The results are summarized in Fig. 3 for five colors: blue, green, red, yellow, and purple. The calculated spectral resolutions were 53, 45, and 29 nm for the smallest gap, 0.5 mm, and 1.5 mm, respectively [see inset in Fig. 2(b)]. Both the DBS method and regularization were used for spectrum reconstruction. The reconstructed spectra were compared to the spectra measured using a conventional spectrometer (shown in black). Noise present in the reconstructed spectra does not affect the color values significantly. RGB color values were calculated from the reconstructed spectra by integrating over the corresponding bands: 450–500 nm for blue (B), 500–580 nm for green (G), and 580–700 nm for red (R). They are then quantized to 8 bits (256 levels). The minimum wavelength was limited to 450 nm by the supercontinuum source. The reconstructed color values agree very well with the actual color values (estimated from the measured spectra) and exhibit average errors of 5.62%, 7.50%, and 7.11% (out of 256 levels) for the smallest gap, 0.5 mm, and 1.5 mm, respectively. These are all well within the 10% color-error threshold and are acceptable for visual perception. A more rigorous standard for color accuracy, the color difference (ΔE) based on the CIE94 definition, is also summarized in Fig. 3. The ΔE averaged over the five exemplary colors at the best-performing gap is 2.42, which is close to the previously proposed just-noticeable-difference value of approximately 2.3 [31].
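The band-integration step described above maps a reconstructed spectrum to 8-bit RGB values. Below is a minimal sketch using the band edges quoted in the text; the joint normalization to the strongest band is our assumption, as the exact quantization procedure is not specified here:

```python
import numpy as np

# Band edges from the text: B 450–500 nm, G 500–580 nm, R 580–700 nm.
BANDS = {"B": (450.0, 500.0), "G": (500.0, 580.0), "R": (580.0, 700.0)}

def spectrum_to_rgb(wl, S):
    """Integrate spectrum S (sampled at uniform wavelengths wl, in nm)
    over the R/G/B bands and quantize each channel to 8 bits."""
    dl = wl[1] - wl[0]  # uniform grid spacing
    vals = {}
    for ch, (lo, hi) in BANDS.items():
        m = (wl >= lo) & (wl < hi)
        vals[ch] = S[m].sum() * dl  # rectangle-rule band integral
    peak = max(vals.values()) or 1.0  # normalize jointly to strongest band
    return tuple(int(round(255 * vals[ch] / peak)) for ch in ("R", "G", "B"))
```

For example, a spectrum that is nonzero only between 500 and 580 nm maps to a pure green (0, 255, 0).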
In order to demonstrate color imaging, we used the image of a rainbow printed on a transparency by a high-resolution printer. First, a conventional color sensor (Model #: DFM22BUC03-ML, The Imaging Source) was used to capture a reference image, as shown in Fig. 4(a). For a fair comparison, the monochrome and color sensor chips were identical, the only difference being the presence of the Bayer filter on the color sensor. The illumination system as well as the exposure time (14 ms) were kept the same for both the conventional color sensor and our DFA sensor. In these preliminary experiments, no lens was incorporated in the system (no magnification), and the sample was placed as close as possible to either the front surface of the conventional color sensor or the DFA substrate; in practice, there was a small gap. For this experiment, one of the three calibrated gap values was used.
The raw monochrome image is shown in Fig. 4(b). The corresponding color images reconstructed using the DBS algorithm and regularization are shown in Figs. 4(c) and 4(d), respectively. The latter shows somewhat less numerical noise, as expected [24]. These images were denoised by applying a simple low-pass filter in the Fourier domain; the raw images before denoising are shown in Supplement 1. Both reconstructed images are significantly brighter than the reference image. One constraint of DFA-based color reconstruction is the crosstalk that occurs between neighboring spatial pixels due to the diffraction of light. A simple approach to reduce the impact of crosstalk is to undersample the raw monochrome data and use the undersampled images for reconstruction. To undersample the image, we first estimate the spatial extent of the crosstalk using our diffraction model [29,32]; we calculated the number of DFA unit cells affected by crosstalk at each of the three gap values. Since the crosstalk here spanned several unit cells, we undersampled the monochrome image at every fifth unit cell of the DFA, which provides sufficient spatial resolution for the rainbow image used here. The undersampling was applied to the raw image used for both the DBS method and regularization [Fig. 4(b)]. Since each DFA unit cell corresponds to a block of sensor pixels [see Fig. 1(a)], the undersampled data yields one reconstructed image pixel per retained unit cell. After denoising, we further applied a simple interpolation algorithm to extend the image reconstructed via regularization [Fig. 4(d)] to the same size as the reference image, and the resulting image is shown in Fig. 4(e). This reconstructed color image is of the same quality as the reference image, but is considerably brighter, as no absorptive color-filter array is used.
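The two post-processing steps, Fourier-domain denoising and unit-cell undersampling, can be sketched as below. The circular low-pass cutoff and the block-wise undersampling pattern are illustrative choices of ours, not the paper's exact parameters:

```python
import numpy as np

def lowpass_denoise(img, keep=0.15):
    """Suppress high-spatial-frequency noise with a circular low-pass
    filter in the Fourier domain (cutoff = keep * Nyquist)."""
    F = np.fft.fftshift(np.fft.fft2(img))
    ny, nx = img.shape
    y, x = np.ogrid[:ny, :nx]
    r = np.hypot(y - ny / 2, x - nx / 2)  # radial frequency from DC
    F[r > keep * min(ny, nx) / 2] = 0.0
    return np.fft.ifft2(np.fft.ifftshift(F)).real

def undersample_cells(img, cell, step=5):
    """Retain the full cell x cell pixel block of every `step`-th DFA
    unit cell (in both axes) to suppress inter-cell crosstalk."""
    ny, nx = img.shape
    blocks = [
        [img[i:i + cell, j:j + cell] for j in range(0, nx - cell + 1, step * cell)]
        for i in range(0, ny - cell + 1, step * cell)
    ]
    return np.block(blocks)
```

Each retained block then feeds the per-pixel spectral reconstruction independently of its (discarded) neighbors.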
Interestingly, our imaging architecture can also be used as a single-shot hyperspectral imager. To illustrate this, the intensity distribution maps at five wavelengths are plotted in Fig. 4(f), normalized as indicated by the color bar. For clarity, we identify five color ribbons of the rainbow, labeled in Figs. 4(a), 4(c), 4(d), and 4(f). The longest-wavelength channel is close to the infrared; its map looks nearly uniform and is therefore excluded from our analysis. The blue ribbon (#2) and the green ribbon (#3) have signals only from the corresponding blue and green channels, respectively. The purple ribbon (#1) is composed of both blue and red. The yellow region (#4) receives contributions from the 580 and 630 nm channels as well as a shorter-wavelength channel. And the red part (#5) consists mostly of 630 nm signals with a smaller contribution at 580 nm. The ratio of the operating bandwidth to the spectral resolution sets the number of resolvable spectral channels.
The conventional Bayer filter is absorptive and therefore has low light transmission. In the RGB three-color scheme, each subpixel filter transmits only one color while absorbing the other two. As a result, the overall photon throughput cannot surpass 1/3 (1/2 for green, as two green subpixels are used per unit cell). Our DFA, on the other hand, is transparent, and all the photons can be utilized for imaging; a roughly threefold improvement in photon utilization, and hence sensitivity, is therefore theoretically expected. To experimentally quantify the light-sensitivity enhancement of the DFA-based color sensor over the Bayer-filter-based color sensor, we averaged the signal intensity over the measured image as a function of exposure time for both devices. As mentioned earlier, the sensor chip and the experimental conditions were identical for both devices. For the 8-bit sensors used in our experiments, the intensity values range from 0 to 255. At an exposure time of 20 ms, the DFA-sensor assembly was saturated, with an average image intensity of 255. The measured values are plotted in Fig. 5(a). A peak enhancement in light sensitivity of 3.12 was measured, and the enhancement factor averaged over exposure times of 1–20 ms is 2.67. This is higher than what was previously reported with a more complex device [15].
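The sensitivity comparison above reduces to a ratio of mean image intensities at matched exposure times, excluding saturated frames. A small sketch of this bookkeeping (with synthetic data; the measured curves are those of Fig. 5(a)):

```python
import numpy as np

def sensitivity_enhancement(I_dfa, I_bayer, sat=255):
    """Per-exposure sensitivity ratio of the DFA sensor over the Bayer
    sensor. I_dfa, I_bayer: mean image intensities at matched exposure
    times. Saturated DFA frames (>= sat) are excluded, since a clipped
    mean underestimates the true signal. Returns (peak, mean) ratio."""
    ok = I_dfa < sat
    ratio = I_dfa[ok] / I_bayer[ok]
    return ratio.max(), ratio.mean()
```

With ideal linear responses the ratio is constant; in practice it varies with exposure, which is why both the peak and the exposure-averaged value are quoted.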
As discussed previously, the spatial resolution of our DFA-sensor assembly is constrained primarily by the impact of crosstalk, which in turn is determined by the distance between the DFA and the sensor. Higher spectral resolution favors a larger gap; however, increasing the gap enlarges the crosstalk area, which reduces the spatial resolution. We experimentally measured the modulation transfer function (MTF) to quantify this effect by imaging an object composed of periodic opaque (black) lines at various periods printed on a transparency. The object was again placed in close proximity to the DFA substrate, as before. The MTF is calculated as a function of the spatial frequency (number of lines per unit length, in cycles/mm) via the relative contrast, MTF = V_image / V_object, where V_image and V_object are the visibilities (contrasts) of the image and the object, respectively. The visibility is defined as V = (I_max − I_min) / (I_max + I_min), where I_max and I_min are the maximum and minimum intensities of the periodic pattern; the measured MTF curves are plotted in Fig. 5(b). The maximum spatial frequency that could be measured was limited by the resolution of the printer used to print the object pattern onto the transparency. The spatial resolution of our DFA-sensor assembly is then given by the cutoff spatial frequencies, which correspond to resolutions of 95, 143, and 380 μm for the smallest gap, 0.5 mm, and 1.5 mm, respectively. These measured values agree well with numerical predictions based on the crosstalk effect (see Supplement 1). Simulations of the far-field diffraction patterns also suggest an almost linear relationship between the spatial resolution and the gap [see the inset of Fig. 5(b)], because the diffraction angle is fixed for a given structure and wavelength. Here, the spatial resolution is determined by the minimum distance between two DFA unit cells at which, when illuminated independently, the interaction of their diffracted fields becomes negligible. Note that higher spatial resolution may be achieved either by designing a better DFA or by applying computational techniques to compensate for crosstalk.
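The visibility and relative-contrast definitions above translate directly into code. A short sketch, assuming clean 1D intensity profiles extracted across the line pattern:

```python
import numpy as np

def visibility(profile):
    """Michelson visibility V = (Imax - Imin) / (Imax + Imin)
    of a 1D intensity profile across the periodic line pattern."""
    p = np.asarray(profile, dtype=float)
    return (p.max() - p.min()) / (p.max() + p.min())

def mtf_point(image_profile, object_profile):
    """Relative contrast MTF = V_image / V_object at one spatial
    frequency; sweeping the line period traces out the full MTF curve."""
    return visibility(image_profile) / visibility(object_profile)
```

A perfectly transferred pattern gives MTF = 1; blurring by crosstalk reduces I_max, raises I_min, and drives the value toward the cutoff.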
All our experiments were conducted assuming on-axis illumination. It is well known that oblique incidence shifts the diffraction pattern. We characterized the acceptance angle of the DFA-sensor assembly; the experimental details can be found in Supplement 1. The DFA sensor works best for on-axis illumination, as expected. Nevertheless, it is possible to calibrate the impact of off-axis illumination and reconstruct color images; in this proof-of-principle study, however, such techniques were not implemented.
Noise can be a limiting factor in all imaging systems. The color deviations observed in the reconstructed images [Figs. 4(c) and 4(d)] are likely due to alignment errors, electronic noise, computational errors, and fabrication errors. Fortunately, they are not strong enough to degrade the image quality appreciably. Numerical studies of color accuracy as a function of noise level predict that a sufficiently high signal-to-noise ratio (SNR) results in a color error of less than 10% (see Supplement 1).
5. SMALLER SENSOR PIXEL
All the experiments and discussions so far are based on the 6 μm sensor pixel. However, there is an emerging trend in both research and industry to reduce the size of the sensor pixels. Sensors with 1.67 μm or even smaller pixels are commercially available and widely used. Such pixels suffer even more from poor light sensitivity. Here we show, using careful numerical studies, that our technology can drastically improve the sensitivity of such sensors as well.
First, we designed a new DFA whose unit cell is composed of an array of 1 μm squares. Each unit cell corresponds to one spatial pixel of the image and covers a block of sensor pixels with a pixel size of 1.67 μm. This DFA also has a quasi-random topography, as depicted in Fig. 6(a). The correlation function plotted in Fig. 6(b), derived from the simulated SS-PSF, indicates a spectral resolution of 44 nm. A test pattern was numerically synthesized and successfully reconstructed by regularization without any undersampling [Figs. 6(c) and 6(d)]. The spectrum of each point in the original object is numerically reconstructed from the pseudospectra of the R, G, and B channels; details are included in Supplement 1. As anticipated, the DFA, together with the regularization algorithm, works well for the 1.67 μm sensor pixel except at boundaries of abrupt color change, where crosstalk smears the color accuracy. Scalar diffraction calculation estimates the lateral spread of the crosstalk (or spatial resolution) to be approximately three image pixels in this configuration. Examples at five small areas are summarized in Fig. 6(e). At the boundaries of spatial color change (areas #1, 2, and 3), severe color distortions are observed due to the crosstalk effect; these "transition regions" span around three image pixels, in agreement with the scalar diffraction computation. In the areas of uniform color (areas #4 and 5), however, our reconstructions demonstrate negligible distortion and noise. The absolute error between the reconstructed and true images, averaged over the entire image, is well below 5%. For this object, reconstruction by regularization takes roughly 30 s on a Lenovo W540 laptop (Intel i7-4700MQ CPU at 2.40 GHz, 16.0 GB RAM) without any parallel computation.
Since each image pixel is reconstructed independently (neglecting crosstalk), the algorithm can be highly parallelized and thus significantly accelerated using multicore CPUs or GPUs, with the calibration data stored in shared memory in advance. It is noteworthy that the algorithm could also be implemented directly on a conventional image signal processor in the future.
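One way to realize this parallelism, sketched below, is to precompute the regularized inverse once and apply it to all image pixels in a single matrix multiplication, which BLAS or GPU back ends distribute across cores automatically. This is our illustration (notation as before, with alpha an assumed regularization weight), not the paper's implementation:

```python
import numpy as np

def reconstruct_all_pixels(P, I_stack, alpha=1e-2):
    """Reconstruct the spectra of all image pixels in one BLAS call.

    P: (m, n) calibrated SS-PSF matrix shared by every image pixel
    I_stack: (n_pixels, m) raw diffracted measurements, one row per pixel
    Returns (n_pixels, n) reconstructed spectra. Because the pixels are
    independent, the regularized pseudo-inverse is built once from the
    SVD of P and applied to every pixel simultaneously.
    """
    U, s, Vt = np.linalg.svd(P, full_matrices=False)
    P_reg_inv = Vt.T @ np.diag(s / (s**2 + alpha**2)) @ U.T  # (n, m)
    return I_stack @ P_reg_inv.T
```

The same batched product maps naturally onto a GPU or an image signal processor's fixed-function matrix units.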
Another critical benefit of using smaller sensor pixels is that the gap can be significantly decreased, which is important for a compact sensor. In practice, this gap is limited by the thickness of the protective cover glass on the sensor chip. In principle, the DFA can be fabricated on the sensor chip directly [4,15].
In our approach, the spatial and spectral resolutions are traded off against one another via the gap. Since both high spatial and high spectral resolution are desired in practice, we define a new figure of merit, the resolution product (RP), which is the product of the spatial and spectral resolutions. Figure 7(a) shows the simulated spatial and spectral resolutions as a function of the gap for the 1 μm-square DFA [Fig. 6(a)]. Again, we assume uniform plane-wave illumination with the sample in proximity to the DFA substrate. Figure 7(b) plots the RP as a function of the gap. Because the spectral resolution is a nonlinear function of the gap (improving rapidly at small gaps but only slowly at large gaps), while the spatial resolution degrades approximately linearly with the gap, there exists an optimum gap at which the RP is minimized. This gap represents the best compromise between spatial and spectral resolution for this configuration.
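The RP optimization can be illustrated with assumed functional forms. The coefficients and the 1/d² saturation model below are purely illustrative, chosen only to reproduce the qualitative behavior described in the text, and are not fitted to the paper's simulations:

```python
import numpy as np

def resolution_product(d, a=10.0, b=0.2, c=25.0):
    """RP(d) = dl(d) * dx(d) for an assumed (illustrative) model:
    spectral resolution dl = a/d**2 + c  (improves fast, then saturates),
    spatial resolution  dx = b*d        (degrades ~linearly with the gap).
    For this model the minimum of RP lies at d* = sqrt(a/c)."""
    return (a / d**2 + c) * (b * d)

d = np.linspace(0.1, 3.0, 1000)            # candidate gaps (arbitrary units)
d_opt = d[np.argmin(resolution_product(d))]  # numerical optimum gap
```

Any model with a fast-then-saturating spectral term multiplied by a linearly growing spatial term yields such an interior minimum, which is the qualitative content of Fig. 7(b).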
We demonstrated a new color sensor that utilizes a transparent diffractive-filter array and computational reconstruction. Our color sensor transmits significantly more light than conventional Bayer sensors, and we measured an increase in light sensitivity as high as 3.12. We applied two different computational techniques for color reconstruction. Diffractive filters incur crosstalk, which sets a tradeoff between the spectral and spatial resolutions; we experimentally demonstrated a spatial resolution of 90 μm and a spectral resolution of 53 nm. Improvements in computational techniques, such as compensating for crosstalk and oblique illumination, can relax these tradeoffs in the future. Simulations show the technique's potential for sensors with pixels as small as 1.67 μm. Our technique can also be used for single-shot hyperspectral imaging.
National Aeronautics and Space Administration (NASA) (NNX14AB13G); Office of Naval Research (ONR) (55900526); U.S. Department of Energy (DOE) (EE0005959).
The authors would like to thank Tom Slowik at the University of Utah Machine Shop for machining the camera holder.
See Supplement 1 for supporting content.
1. B. E. Bayer, “Color imaging array,” U.S. Patent 3,971,065 (July 20, 1976).
2. Y. Yu, L. Wen, S. Song, and Q. Chen, “Transmissive/reflective structural color filters: theory and applications,” J. Nanomater. 2014, 212637 (2014).
3. N. Dean, “Colouring at the nanoscale,” Nat. Photonics 10, 15–16 (2015).
4. P. B. Catrysse and B. A. Wandell, “Integrated color pixels in 0.18-μm complementary metal oxide semiconductor technology,” J. Opt. Soc. Am. A 20, 2293–2306 (2003). [CrossRef]
5. K. Kumar, H. Duan, R. S. Hegde, S. C. W. Koh, J. N. Wei, and J. K. W. Yang, “Printing colour at the optical diffraction limit,” Nat. Nanotechnol. 7, 557–561 (2012). [CrossRef]
6. Q. Chen, D. Das, D. Chitnis, K. Walls, T. D. Drysdale, S. Collins, and D. R. S. Cumming, “A CMOS image sensor integrated with plasmonic colour filters,” Plasmonics 7, 695–699 (2012). [CrossRef]
7. L. Lin and A. Roberts, “Angle-robust resonances in cross shaped aperture arrays,” Appl. Phys. Lett. 97, 061109 (2010). [CrossRef]
8. E. Laux, C. Genet, T. Skauli, and T. W. Ebbesen, “Plasmonic photon sorters for spectral and polarimetric imaging,” Nat. Photonics 2, 161–164 (2008). [CrossRef]
9. S. Yokogawa, S. P. Burgos, and H. A. Atwater, “Plasmonic color filters for CMOS image sensor applications,” Nano Lett. 12, 4349–4354 (2012). [CrossRef]
10. S. P. Burgos, S. Yokogawa, and H. A. Atwater, “Color imaging via nearest neighbor hole coupling in plasmonic color filters integrated onto a complementary metal-oxide semiconductor image sensor,” ACS Nano 7, 10038–10047 (2013). [CrossRef]
11. K. Walls, Q. Chen, J. Grant, S. Collins, D. R. S. Cumming, and T. D. Drysdale, “Narrowband multispectral filter set for visible band,” Opt. Express 20, 21917–21923 (2012). [CrossRef]
12. A. F. Kaplan, T. Xu, and L. J. Guo, “High efficiency resonance based spectrum filters with tunable transmission bandwidth fabricated using nanoimprint lithography,” Appl. Phys. Lett. 99, 143111 (2011). [CrossRef]
13. M. J. Uddin and R. Magnusson, “Efficient guided-mode resonant tunable color filters,” IEEE Photon. Technol. Lett. 24, 1552–1554 (2012). [CrossRef]
14. L. Frey, P. Parrein, J. Raby, C. Pelle, D. Herault, M. Marty, and J. Michailos, “Color filters including infrared cut-off integrated on CMOS image sensor,” Opt. Express 19, 13073–13080 (2011). [CrossRef]
15. S. Nishiwaki, T. Nakamura, M. Hiramoto, T. Fujii, and M. Suzuki, “Efficient colour splitters for high-pixel-density image sensors,” Nat. Photonics 7, 240–246 (2013). [CrossRef]
16. A. Wagadarikar, R. John, R. Willett, and D. Brady, “Single disperser design for coded aperture snapshot spectral imaging,” Appl. Opt. 47, B44–B51 (2008). [CrossRef]
17. M. Jayapala, A. Lambrechts, N. Tack, B. Geelen, B. Masschelein, and P. Soussan, “Monolithic integration of flexible spectral filters with CMOS image sensors at wafer level for low cost hyperspectral imaging,” in International Image Sensor Workshop (Snowbird, 2013).
18. L. J. Guo, “Recent progress in nanoimprint technology and its applications,” J. Phys. D 37, R123–R141 (2004). [CrossRef]
19. M. D. Galus, E. Moon, H. I. Smith, and R. Menon, “Replication of diffractive-optical arrays via photocurable nanoimprint lithography,” J. Vac. Sci. Technol. B 24, 2960–2963 (2006). [CrossRef]
20. K. Reimer, H. J. Quenzer, M. Jurss, and B. Wagner, “Micro-optic fabrication using one-level gray-tone lithography,” Proc. SPIE 3008, 279–288 (1997). [CrossRef]
21. P. Wang and R. Menon, “Computational spectrometer based on a broadband diffractive optic,” Opt. Express 22, 14575–14587 (2014). [CrossRef]
22. B. Redding, S. F. Liew, R. Sarma, and H. Cao, “Compact spectrometer based on a disordered photonic chip,” Nat. Photonics 7, 746–751 (2013). [CrossRef]
23. B. Redding, S. M. Popoff, and H. Cao, “All-fiber spectrometer based on speckle pattern reconstruction,” Opt. Express 21, 6584–6600 (2013). [CrossRef]
24. P. Wang and R. Menon, “Computational spectroscopy via singular-value-decomposition and regularization,” Opt. Express 22, 21541–21550 (2014). [CrossRef]
25. P. Wang and R. Menon, “Optimization of periodic nanostructures for enhanced light-trapping in ultra-thin photovoltaics,” Opt. Express 21, 6274–6285 (2013). [CrossRef]
26. B. Shen, P. Wang, R. Polson, and R. Menon, “An ultra-high efficiency metamaterial polarizer,” Optica 1, 356–360 (2014). [CrossRef]
27. B. Shen, P. Wang, R. Polson, and R. Menon, “An integrated-nanophotonics polarization beamsplitter with 2.4 × 2.4 μm2 footprint,” Nat. Photonics 9, 378–382 (2015). [CrossRef]
28. P. Wang and R. Menon, “Optical microlithography on oblique and multiplane surfaces using diffractive phase masks,” J. Micro/Nanolithogr. MEMS, MOEMS 14, 023507 (2015). [CrossRef]
29. P. Wang, J. A. Dominguez-Caballero, D. J. Friedman, and R. Menon, “A new class of multi-bandgap high-efficiency photovoltaics enabled by broadband diffractive optics,” Prog. Photovoltaics 23, 1073–1079 (2015). [CrossRef]
30. P. Wang, C. G. Ebeling, J. Gerton, and R. Menon, “Hyper-spectral imaging in scanning-confocal-fluorescence microscopy using a novel broadband diffractive optic,” Opt. Commun. 324, 73–80 (2014). [CrossRef]
31. M. Mahy, L. Van Eycken, and A. Oosterlinck, “Evaluation of uniform color spaces developed after the adoption of CIELAB and CIELUV,” Color Res. Appl. 19, 105–121 (1994).
32. J. W. Goodman, Introduction to Fourier Optics (Roberts & Company, 2005).