## Abstract

Recent progress has made metalenses a reality, with many publications relating to methods of implementation and performance evaluation of these elements. The basic function of a metalens is similar to that of a continuous (kinoform) diffractive lens, with the advantage that it can be manufactured as a binary component. A significant limitation of metalenses is their strong chromatic aberration. Recently there has been some success in correcting metalens chromatic aberration, albeit at the expense of transmission efficiency into the desired diffraction order. Clearly, there is a tradeoff between parameters such as spectral bandwidth and spatial resolution. Hence, a major goal of this paper is to set up a metric for the evaluation of metalens performance, allowing fair comparison of novel metalens technologies, such as achromatic metalenses, in terms of optical performance. Furthermore, we explore possibilities for the practical use of non-chromatically corrected metalenses in polychromatic applications, by optimizing the metalens parameters. It is our hope that the current manuscript will serve as a guide for the design and evaluation of metalenses for practical applications.

© 2017 Optical Society of America

## 1. Introduction

Dielectric metasurfaces have been a subject of extensive research over the years [1–12]. Recent work has focused on metalenses as potential candidates for replacing conventional lenses in miniature imaging systems [13–16]. The advantages include miniaturization of the imaging system, a high level of integration, flexibility in the design of optical functionalities, and compatibility with standard semiconductor manufacturing processes.

The issue of correcting a metalens over a significant field-of-view for a narrow spectral band has been addressed by Arbabi [17]. However, many imaging systems use broad spectrum illumination. In such systems, metalens chromatic aberration is a critical limitation.

Recently, work has been done on the correction of metalens chromatic aberration [18–20]. At first, correction was achieved at discrete wavelengths, with a cylindrical lens. More recently, correction was achieved over a continuous waveband, with a reflective metalens [21,22]. So far, correction has not been achieved with a “conventional” metalens, i.e. one that is rotationally symmetric and operates in transmission. Furthermore, in all cases, the correction was achieved at the expense of efficiency.

The focus of this paper is on non-chromatically corrected metalenses, which are optically equivalent in most aspects to classic diffractive lenses. The question we address is, can such lenses be used for continuous broad spectrum applications (for example outdoor imaging or microscopy with standard incandescent illumination)? If so, what is the optimal spectral band that should be used? In particular, we demonstrate and discuss the tradeoff between the two primary performance metrics, resolution and signal-to-noise ratio (SNR), that exists when choosing metalens design parameters. Naturally, the larger the spectral range and the aperture of the system, the better the SNR will be. However, this comes at the expense of degraded spatial resolution, as a result of chromatic aberration. We will address this tradeoff and provide guidelines for finding the optimal values for these two free parameters, based on system requirements.

The analysis described in this paper is equally applicable to a conventional diffractive lens, and not only a metalens. The reason we focus on metalenses is the strong motivation that exists to integrate such lenses into sensors, resulting from the option for CMOS compatibility. In addition, the attempts to chromatically correct metalenses make it important to provide a metric for comparison, since the chromatic correction comes at the expense of efficiency [16].

In sections 2-5 of this paper, we analyze systems with small field-of-view (FOV), such as a microscope objective, where the dominant aberration is axial chromatic. Spherical aberration can be corrected in this case by using an appropriate phase profile. While sphero-chromatic aberration will exist in such a lens (i.e. the spherical aberration will be perfectly corrected only at the design wavelength), this effect is negligible compared to the uncorrected axial chromatic aberration.

For systems with large field of view, it is necessary to take the off-axis aberrations into account. Section 6 extends the analysis to cover this important case.

## 2. Metalens resolution

The chromatic aberration of a conventional metalens is identical to that of a diffractive lens, and is given by Eq. (1) [23].

$$\Delta f = f\cdot\frac{\Delta\lambda}{\lambda} \tag{1}$$

where *λ* and *f* are the nominal (central) wavelength and focal length, respectively, of the imaging system, *Δλ* is the wavelength band over which the system operates, and *Δf* is the corresponding change in focal length, i.e. the longitudinal chromatic aberration for an object at infinity (or beyond the hyperfocal distance).

By “conventional” metalens we mean a metasurface element designed to convert one spherical wavefront to another, by implementing a phase function whose Fresnel zone border locations (where an approximately 2π phase jump is implemented) are independent of wavelength. The exact nano-structure used to implement the phase shift (e.g. geometrical phase based nano-fins [6], truncated waveguide based nano-pillars [5], Huygens resonators [7]) has no influence on the chromatic aberration. Achromatic metalenses, on the other hand, attempt to achieve different zone border locations for different wavelengths, by using highly dispersive nano-antenna structures [19,21]. Thus, Eq. (1) will not apply to them.

The transverse axial chromatic aberration (TAC) is then given by Eq. (2), based on the geometry shown in Fig. 1.

$$TAC = u'\,\Delta f \approx \frac{D}{2f}\,\Delta f = \frac{D}{2}\cdot\frac{\Delta\lambda}{\lambda} \tag{2}$$

where *u’* is the marginal ray angle in the image domain (i.e. the exit angle of the ray depicted in green in Fig. 1, which is a nominal-wavelength ray, incident parallel to the optical axis, and passing through the edge of the aperture [24]) and *D* is the lens aperture diameter. Note that the difference in exit angle between the green and blue rays, while greatly exaggerated in Fig. 1, is actually quite small, so that the angle of the blue ray can also be approximated as *u’*. The final equality in Eq. (2) arises from substitution of *Δf* from Eq. (1). The geometrical point-spread-function (PSF), for a single wavelength shifted by *Δλ* from the nominal value, will be a top-hat profile with radius equal to this transverse aberration (assuming uniform illumination at the pupil, which is generally the case in imaging applications).

As a case study, we use the metalens demonstrated by Khorasaninejad et al. [13]. The lens parameters are as follows: *f* = 0.725mm, *D* = 2mm, *λ* = 530nm. This lens was experimentally shown to resolve a 3-bar target with a period of 4.4µm, over a waveband of 100nm. Using these metalens parameters in Eq. (2), with Δλ = 50nm (since we place the image plane at the nominal wavelength focus) we obtain a transverse chromatic spot radius of 94µm. How, then, is it possible to resolve a target with a 4.4µm period? The answer to this question will become clear by the end of this section.
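The arithmetic above can be checked with a short calculation (a Python sketch on our part; the parameter values are those quoted from [13], while the paper's own simulations were performed in Matlab):

```python
# Chromatic blur radius per Eq. (2): r = (D/2) * (delta_lambda / lambda).
# Parameters quoted above for the metalens of [13].
D = 2.0e-3          # aperture diameter [m]
lam = 530e-9        # nominal wavelength [m]
dlam = 50e-9        # half the 100 nm band (image plane at nominal-wavelength focus)

r_chromatic = (D / 2) * (dlam / lam)   # transverse chromatic spot radius [m]
print(f"{r_chromatic * 1e6:.0f} um")   # -> 94 um
```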

In order to calculate metalens resolution, we must consider not only chromatic aberration, but also the effects of diffraction (as mentioned in section 1, we assume other aberrations to be negligible for small FOV). These effects were simulated in Matlab, based on Fourier analysis of linear optical imaging systems [25,26].

For each spectral range, we determined the number of wavelengths necessary for proper spectral sampling. This was determined empirically, based on convergence testing. The wavelength spacing ranged from 0.2nm at the low end (for spectral range <1nm) to 1nm spacing at the high end (for spectral range 100nm). Gaussian weighting of the wavelengths was used in our simulation (the wavelength range, *Δλ,* is measured between the 1/*e ^{2}* points). When applied to a real system, a weighting corresponding to the actual spectral response may be used.
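As an illustration, the spectral sampling described above might be set up as follows (a Python sketch under our assumptions; `lam0`, `dlam`, and `spacing` are example values, and the 1/e² width condition fixes the Gaussian σ at *Δλ*/4):

```python
import numpy as np

# Wavelength samples across the band, with Gaussian weights whose 1/e^2
# points fall at lam0 +/- dlam/2 (i.e. sigma = dlam/4).
lam0 = 550e-9                 # central wavelength [m] (example)
dlam = 10e-9                  # spectral range between the 1/e^2 points [m]
spacing = 0.5e-9              # sampling step, chosen by convergence testing

lams = np.arange(lam0 - dlam / 2, lam0 + dlam / 2 + spacing / 2, spacing)
sigma = dlam / 4              # 1/e^2 radius = 2*sigma = dlam/2
weights = np.exp(-((lams - lam0) ** 2) / (2 * sigma ** 2))
weights /= weights.sum()      # normalize the weights to unit sum
```

When applied to a real system, these Gaussian weights would simply be replaced by samples of the actual spectral response.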

The top-hat shaped chromatic PSF for each wavelength was calculated based on Eq. (2), and these PSFs were summed, to obtain the total chromatic PSF for a given wavelength range. The total chromatic PSF was then summed in one direction, to obtain the geometrical line-spread-function (LSF). The geometrical LSF was then convolved with the diffraction LSF (Airy pattern), to obtain the physical LSF. Prior to convolution, the LSFs were normalized so that the area under their graph was equal to 1 (in mm units).

Following this, a one-dimensional Fourier transform of the physical LSF was performed, resulting in the optical transfer function (OTF). The absolute value was then taken, to obtain the modulation transfer function (MTF), which is the standard measure of lens resolution [27].
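The pipeline of the last two paragraphs can be sketched as follows (a simplified Python re-implementation, not the paper's Matlab code: for brevity the chromatic top-hat PSFs are collapsed directly to top-hat LSFs, and the diffraction LSF is approximated by a sinc² rather than the exact line integral of the Airy pattern):

```python
import numpy as np

# Geometrical chromatic LSF -> convolve with diffraction LSF -> MTF.
f, D, lam0, dlam = 1e-3, 1e-3, 550e-9, 10e-9     # example lens parameters
x = np.linspace(-20e-6, 20e-6, 2001)             # image-plane coordinate [m]
dx = x[1] - x[0]

lams = np.linspace(lam0 - dlam / 2, lam0 + dlam / 2, 51)
weights = np.exp(-((lams - lam0) ** 2) / (2 * (dlam / 4) ** 2))

# Sum of weighted top-hat profiles, one per wavelength (Eq. (2) radii).
lsf_chrom = np.zeros_like(x)
for li, wi in zip(lams, weights):
    r = max((D / 2) * abs(li - lam0) / lam0, dx)
    lsf_chrom += wi * (np.abs(x) <= r)

# Approximate diffraction LSF (sinc^2 of width lambda * F#).
lsf_diff = np.sinc(x / (lam0 * (f / D))) ** 2

# Normalize areas to 1, convolve, then Fourier transform and take |.|.
lsf_chrom /= lsf_chrom.sum() * dx
lsf_diff /= lsf_diff.sum() * dx
lsf = np.convolve(lsf_chrom, lsf_diff, mode="same") * dx
mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]
freqs = np.fft.rfftfreq(x.size, dx)              # spatial frequency [cycles/m]
```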

It is important to note that while polychromatic diffractive-lens and metalens MTFs have been calculated for specific cases in the past using commercial optical design software [17,28], they have not been calculated, to the best of our knowledge, for such a large spectral range (100nm centered on 550nm) and low F# (0.5). In such a case, a large number of wavelengths is needed to accurately simulate MTF performance, which is not supported by most commercial optical design software. That is why we performed our analysis in Matlab, using the algorithm described above (we did compare our results to results obtained with the commercial optical design software Zemax, for cases in which fewer than 24 wavelengths were needed for accurate simulation, and found good agreement).

An example of calculated LSFs, for a lens of focal length 1mm, aperture diameter 1mm, and spectral range of 10nm centered at 550nm, is given in Fig. 2.

For the metalens of [13], discussed at the end of section 2, the simulated LSF is shown in Fig. 3, and the MTF, relative to the diffraction limit, is shown in Fig. 4.

In Fig. 3 we see that while the LSF reaches zero only at a radial distance of 94µm, as calculated in section 2, it has a very sharp central peak, as a result of the wavelengths near the nominal value (a similar type of LSF is encountered in a lens with high order spherical aberration). Through the Fourier transform relation, the broad chromatic spot translates into the sharp peak of the MTF near zero spatial frequency, giving rise to a sharp drop in modulation of all but the lowest frequencies. The sharp central LSF peak translates into the long ‘tail’ of the MTF (See Fig. 4), giving rise to low, but existent, modulation, at relatively high frequencies.

The visual effect of such an LSF/MTF is similar to ‘veiling glare’, that is, a generally ‘washed out’ low-contrast image, but still allowing details to be resolved [29]. The advantage of such an MTF, which although being low, does not reach zero until the diffraction limit cutoff, is that it can be enhanced using PSF deconvolution [30]. This is true in particular for our case, where the PSF is known.

Going back to the example of [13], a period of 4.4µm is equivalent to a frequency of 227 c/mm (much lower than the cutoff frequency of 3000 c/mm), where the MTF value, according to Fig. 4, is approximately 4%. Therefore, this target can be resolved with the help of simple contrast enhancement (e.g. histogram stretching).

## 3. Metalens SNR

Given the scene radiance at the central wavelength (we take the scene radiance to be constant over the spectral range *Δλ*, since *Δλ* is not expected to be large), the irradiance at the image plane is given by Eq. (3) [31].

$$E_i = \frac{\pi\,\tau\,R_\lambda\,\Delta\lambda}{4F^2} \tag{3}$$

where *τ* is the system transmission, *R _{λ}* is the spectral radiance (W/(sr·m^{2}·µm)), *Δλ* is the spectral range, and *F* is the f-number (*F ≡ f/D*). To simplify our analysis, we assume *τ* = 1 (negligible reflection, absorption and scattering to high diffraction orders), but in a real application, the appropriate *τ(λ)* may be used. We also assume a Lambertian scene, with unity reflectance, so that *πR _{λ} = E _{λ}* [31], where *E _{λ}* is the scene spectral irradiance (a more realistic value for reflectance would be ∼0.2, but for convenience we bundled this into the scene irradiance). In our simulation we introduced an additional factor of 0.6 into the expression for *E _{i}*, to account for the Gaussian weighting of the illumination over *Δλ*.

The camera signal, in units of number of electrons, is then given by Eq. (4) [32].

$$S = \frac{E_i\,A\,t\,\eta\,\lambda}{hc} \tag{4}$$

where we have multiplied *E _{i}* by *A*, the pixel area, and *t*, the camera integration time, to obtain the optical energy [J] impinging on the pixel during the integration time. We then divided by the photon energy *hc/λ* to obtain the number of photons, and finally multiplied by *η*, the detector quantum efficiency, to obtain the signal, in units of number of electrons.

The noise in a CCD or CMOS imager can be roughly divided into shot noise and dark noise. The shot noise follows a Poisson distribution with standard deviation equal to $\sqrt{S}$, i.e. the square root of the number of signal electrons. For large enough *S* (>10) the Poisson distribution can be approximated as a Gaussian distribution [33]. The spatial spectral distribution of the shot noise is flat, since the noise in each pixel is independent of its neighbors, i.e. it is “white noise”.

The dark noise is independent of illumination, but increases with integration time [34]. Therefore, the shot noise is dominant for high signals ($\sqrt{S}\gg N$, where *N* is the dark noise in electrons), while the dark noise is dominant for low signals.

For our analysis we assume the dominant noise to be shot noise. We will re-validate this assumption in section 5, based on the calculated number of photons in our test case. However, since in modern imagers the dark noise is generally low (on the order of 10 electrons), a situation in which we are not shot noise limited means we have very low signal, and is not desirable. Therefore, in most practical cases we will be shot noise limited. If necessary, however, dark noise can easily be added to the model, as relevant for the specific sensor used. Based on the above assumptions, and on Eqs. (3) and (4), the SNR can be described by Eq. (5).

$$SNR = \frac{S}{\sqrt{S}} = \sqrt{S} = \sqrt{\frac{E_i\,A\,t\,\eta\,\lambda}{hc}} \tag{5}$$
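Under the assumptions of this section, the signal and shot-noise SNR can be evaluated as follows (a Python sketch; the 0.6 factor and *τ* = *η* = 1 are the assumptions stated above, and the numerical values are the example conditions used later, in section 5):

```python
import math

H_PLANCK = 6.626e-34   # Planck constant [J s]
C_LIGHT = 2.998e8      # speed of light [m/s]

def signal_electrons(E_lam, dlam_um, F, pix_area, t, lam, eta=1.0, tau=1.0):
    """Signal electrons per pixel, per Eqs. (3) and (4), including the 0.6
    Gaussian-weighting factor; E_lam = pi * R_lambda in W/(m^2 um)."""
    E_i = 0.6 * tau * E_lam * dlam_um / (4 * F ** 2)  # image irradiance [W/m^2]
    energy = E_i * pix_area * t                       # energy on the pixel [J]
    return eta * energy * lam / (H_PLANCK * C_LIGHT)  # photons -> electrons

# Example: F/1, 5.6 nm band, 1 um pixel, 4 ms, 'very dark day' irradiance.
S = signal_electrons(E_lam=1.4, dlam_um=0.0056, F=1.0,
                     pix_area=1e-12, t=4e-3, lam=550e-9)
snr = math.sqrt(S)     # shot-noise-limited SNR
```

This evaluates to roughly 13 electrons per pixel, consistent with the "just above 10 electrons" figure discussed in section 5.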

## 4. Optimization method

In order to optimize a metalens system, we must begin with the system requirements. We assume that the image sensor to be used is given (format and pixel size), as is the required field of view. We assume also that the central wavelength is given, based on the system requirements and the choice of imager. The metalens focal length is then determined according to Eq. (6).

$$f = \frac{h}{\sin\theta} \tag{6}$$

where *θ* is the required half-field-of-view (FOV), and *h* is the corresponding imager format dimension (i.e. if *θ* is the diagonal half-FOV, then *h* is half the imager diagonal). The reason a sine function is used, rather than a tangent function, is that for a wide-FOV metalens corrected for off-axis aberrations (i.e. stop at the front focal plane, see section 6), the geometrical distortion will have a sine-function dependency [28]. For a small FOV, *tanθ* can still be used, since for small angles *sinθ ≈ tanθ*.

To summarize, the system requirements determine the focal length and central wavelength. Our degrees of freedom in the design are the choice of aperture and spectral range. From Eq. (2) it can be seen that the amount of chromatic aberration increases in proportion to the spectral range *Δλ*, and the lens aperture *D*, i.e. the resolution decreases in proportion to these parameters. On the other hand, from Eq. (5) we can see that the SNR increases in proportion to $\sqrt{\Delta \lambda}$ and *D*. It is therefore evident that we have a trade-off between system resolution and sensitivity. We would like to find the optimum spectral range and aperture for the system, based on this trade off.

We would now like to define an appropriate merit function for optimizing the performance of the metalens system. In infrared imaging systems, the accepted figure of merit is the minimum resolvable temperature difference (MRTD) [35,36]. The advantage of the MRTD is that it combines resolution and noise into a single figure of merit. Strictly speaking, the MRTD is a subjective measure, but it can be calculated according to Eq. (7).

$$MRTD(\nu) = K\cdot\frac{NETD}{MTF(\nu)} \tag{7}$$

where NETD is the noise equivalent temperature difference (i.e. the noise in units of temperature in the object domain), *ν* is the spatial frequency, and *K* is a proportionality coefficient determined by the SNR required by the observer in order to resolve the target. An equivalent figure of merit, known as minimum resolvable contrast (MRC), has been defined for the visible spectrum [37].

In our case too, the system performance depends on resolution and noise, so we can propose a similar figure of merit, which we will call ‘Spectral SNR’ (SSNR – note that this term has been used with a different meaning in the field of cryo-electron microscopy). The SSNR is defined in Eq. (8) as the zero-frequency SNR of Eq. (5), multiplied by the MTF (by zero-frequency SNR we mean the SNR in the case of a zero spatial frequency signal, i.e. the SNR in a blank image of constant signal level).

$$SSNR(\nu) = SNR\cdot MTF(\nu) \tag{8}$$

Since the MTF gives us the signal attenuation at each frequency, multiplying it by the SNR gives us the SNR at each spatial frequency (the noise level is independent of frequency, as explained in section 3). The SSNR is equivalent to the inverse of the MRTD, if the constant *K* is taken as *1/S* (*S* = signal) – the advantage being that the SSNR is better when it is larger, as opposed to the MRTD, which is better when it is smaller.

In any optimization task it is necessary to obtain a single number, representing all relevant performance parameters, for use as the system merit function. In our case, this can be achieved by taking the area under the SSNR function, to obtain the average SNR (ASNR) over all relevant spatial frequencies. The relevant frequencies for a particular imaging system are from zero up to the imager Nyquist frequency, as described by Eq. (9).

$$ASNR = \frac{1}{\nu_{Nyq}}\int_0^{\nu_{Nyq}} SSNR(\nu)\,d\nu \tag{9}$$
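This merit function can be sketched in a few lines of Python (our illustration; the triangular MTF in the usage example is made up purely for demonstration):

```python
import numpy as np

def asnr(snr0, freqs, mtf, pixel_pitch):
    """Average SNR: area under SSNR = snr0 * MTF, taken from zero to the
    imager Nyquist frequency and divided by that frequency."""
    nyq = 1.0 / (2.0 * pixel_pitch)        # Nyquist frequency [cycles/m]
    mask = freqs <= nyq
    ssnr = snr0 * mtf[mask]
    df = freqs[1] - freqs[0]               # assume uniform frequency spacing
    area = np.sum((ssnr[:-1] + ssnr[1:]) / 2) * df   # trapezoidal rule
    return area / nyq

# Usage with an illustrative (made-up) triangular MTF:
freqs = np.linspace(0.0, 1e6, 1001)        # 0 .. 1000 cycles/mm
mtf = np.clip(1.0 - freqs / 1e6, 0.0, 1.0)
avg = asnr(snr0=100.0, freqs=freqs, mtf=mtf, pixel_pitch=1e-6)
```

In an optimization loop, `asnr` would simply be evaluated for each candidate aperture and spectral range, using the MTF and SNR computed as in sections 2 and 3.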

## 5. Simulation results

In order to demonstrate the optimization method, we optimize a generic metalens system, with the following parameters: Central wavelength of 550nm, pixel size of 1µm, and focal length of 1mm. The lens is modeled simply as a spherical phase function, modulo 2π, so the analysis is relevant no matter how the phase is implemented (diffractive surface, or meta-surface of any type). For this system, the MTF for various wave-bands, at an aperture of F/1 (this means *F = 1,* where *F≡f/D*), is shown in Fig. 5 (the numbers in the legend are *Δλ* in units of nm), and the matching SSNRs are shown in Fig. 6.

The absolute values of the SSNRs shown in Fig. 6 are for an integration time *t* = 4ms, and spectral irradiance *E _{λ} = πR _{λ}* = 1.4 W/(m^{2}·µm). This spectral irradiance corresponds to 0.1% of the peak sunlight irradiance at 550nm, as given in [26]. These values were purposely chosen to give a low signal, so that the noise will be noticeable in the imaging simulation shown later on, in Fig. 8.

The optimization routine scans over a set of apertures (*F*) and wave-bands (*Δλ*), to locate the optimum values. The results for the ASNR are shown in Fig. 7, where the x-axis is the spectral range *Δλ*, and the y-axis is the aperture given in terms of *F#*. It can be seen that in principle it is better to use as low an F# as possible. However, our ability to do this will be limited by optical design or nano-structure function considerations. Figure 7 will therefore serve best to determine the optimal bandwidth Δλ for a pre-determined F#, based on the dashed line, which marks the locus of the maximum ASNR for each row (i.e. each F#).

It can be seen from Fig. 7, that for the above generic system, operating at F/1, the optimum spectral band is 5.6nm.

The absolute values of the SSNR and the ASNR depend on scene illumination conditions and on camera integration time. However, this dependency simply introduces a multiplicative constant into the expressions for the SSNR and ASNR, which is the same for all values of *Δλ* and *D*. Therefore, the shape of the graphs and the optimization results will not be affected. The pixel area will slightly affect the optimization results, via the Nyquist frequency in the upper integration limit of Eq. (9). In our generic system, the pixel size was chosen as 1µm, so the Nyquist frequency is 500 c/mm.

Having performed a numerical analysis, we would now like to visualize, and thus verify, the results, by performing an imaging simulation. The images in Fig. 8 are based on computer simulation. They show the imaging performance for our generic system, which has an aperture of F/1, at three spectral widths. The images were produced in Matlab by convolving the metalens PSF with the original image, and adding Gaussian shot noise, whose level was calculated according to Eq. (5). The top row shows the results, which are in full agreement with Fig. 7. Figure 8(b) presents the best results, at spectral width 5nm. At 1nm spectral width (Fig. 8(a)) the image quality is noise limited, whereas at 50nm (Fig. 8(c)), it is resolution limited.

The bottom row of images shows the results of image reconstruction in Matlab via a Wiener filter [38]. It can be seen that the best quality image is still obtained at 5nm spectral width. However, the quality of all the images is much improved. While the Wiener filter did an excellent job in reducing the noise at the 1nm spectral width (bottom-right), in a real case it is likely that it will not perform as well, since the noise level, used as an input to the Wiener filter, will not be known accurately.
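The reconstruction step can be sketched as a frequency-domain Wiener deconvolution (a minimal Python version, not the paper's Matlab code; `nsr` is the assumed noise-to-signal power ratio, which, as noted above, is only approximately known in a real system):

```python
import numpy as np

def wiener_deconvolve(image, psf, nsr=1e-2):
    """Wiener deconvolution with a known, image-sized, centered PSF."""
    H = np.fft.fft2(np.fft.ifftshift(psf))          # PSF -> transfer function
    G = np.fft.fft2(image)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)         # Wiener filter
    return np.real(np.fft.ifft2(W * G))
```

For a noiseless test image blurred with a known Gaussian PSF and a small `nsr`, this recovers the original almost exactly; with real noise, `nsr` trades residual blur against noise amplification.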

We can now revisit our assumption regarding the system being shot noise limited. As a reasonable value for the dark noise we will use 10 electrons per pixel. Therefore, if we have a signal of 100 electrons, the shot noise will be equal to the dark noise, and from approximately 1000 electrons of signal (31 electrons of shot noise) and upwards we can assume a shot-noise-limited system. It should be noted that the region in which we are shot noise limited largely coincides with the region in which we have a reasonable SNR (of at least 1000/(31 + 10) ≈ 24).

In Fig. 9 we show the number of electrons per pixel as a function of aperture and spectral range, for an integration time of 4ms, and spectral irradiance of 1.4 W/(m^{2}·µm) – the same parameters that were used in Figs. 6, 7 and 8. It can be seen that at F/1 and spectral width 5.6nm, we have just above 10 electrons per pixel per integration time – similar to our above dark noise estimate. This is clearly not a shot noise limited situation. However, as previously mentioned, we purposely used very low illumination conditions in our simulation, corresponding to a ‘very dark day’ [39], so that the noise will be visible in Fig. 8. With illumination two orders of magnitude higher, corresponding to ‘full daylight’, we will have 1000 electrons per pixel per integration time, thus becoming shot noise limited. For the intermediate case, of an ‘overcast day’, we can still get 1000 electrons if we increase the integration time to 40ms (which still allows a real-time video rate of 25 frames per second). Conversely, if we use a high-quality cooled imager, we can reduce the dark noise to ~1 electron, in which case even the low-light conditions used in the simulation will give shot noise limited performance.

To summarize, we ask the following question: Can a reasonable SNR indeed be achieved by a metalens system, even though only a small portion of the spectrum is used for signal? Based on the previous paragraphs, it should be possible to obtain reasonable SNR in outdoor illumination. Indeed, we have neglected several factors in our analysis (metalens transmission and scattering, scene reflectance, quantum efficiency), so a specific and detailed analysis will be required for any practical application. With indoor illumination, the situation is generally worse, but can be improved by use of strong artificial illumination or a flash.

## 6. Large field-of-view

In the case of a metalens with a large field-of-view, off-axis aberrations come into play. In the case of a system operating over a very narrow spectral band, a good solution for these aberrations can be obtained by placing the system aperture stop at the front focal plane of the metalens [28]. This nullifies the coma and astigmatism, in addition to field curvature which is zero for any diffractive lens.

This solution was implemented in [17], with some added complexity, since the combination of large aperture and large field-of-view required correction of the spherical aberration at the stop, and not at the lens surface [28]. This was accomplished by placing an additional metalens at the stop (on the opposite side of the same substrate). In the following, we will assume that the spherical aberration is corrected (e.g. using an additional metalens at the stop, or if the aperture is small enough – using the phase function of the original metalens).

What happens when one wants to operate over a finite spectral band? Although quasi-monochromatic operation is discussed briefly by Buralli and Morris [28], they refer only to on-axis chromatic aberration. In fact, when considering wide-field operation over a spectral band, the lateral chromatic aberration must also be taken into account. As we shall see, for the case of the aperture stop at the lens surface, the lateral chromatic aberration is zero, but this is not so for a stop at the front focal plane. This raises the question of whether, for a finite spectral band, the optimal stop location is still at the front focal plane, or whether it should be moved toward the lens.

The lateral chromatic aberration of a thin lens is given by Eq. (10) [24].

$$TCH = TAC\cdot\frac{y_p}{y} \tag{10}$$

where *TAC* is the transverse axial chromatic aberration given by Eq. (2), *y _{p}* is the chief ray height at the lens, and *y* is the marginal ray height at the lens – in our case equal to the stop aperture radius, as shown in Fig. 10.

If the stop is removed from the lens by a distance *s,* we have ${y}_{p}=s\cdot \mathrm{tan}\theta $, where *θ* is the field-of-view. By substituting this, and Eq. (2) (with *D/2 = y*), into Eq. (10), we obtain Eq. (11).

$$TCH = s\cdot\tan\theta\cdot\frac{\Delta\lambda}{\lambda} \tag{11}$$

If the stop is placed at the front focal plane of the lens (*s = f*), we obtain Eq. (12).

$$TCH = f\cdot\tan\theta\cdot\frac{\Delta\lambda}{\lambda} \tag{12}$$

According to [28], the relevant off-axis monochromatic aberrations are coma and astigmatism. For the case of a planar phase element, their transverse magnitudes are given by Eqs. (13) and (14), respectively [28,40].

From Eqs. (11), (13) and (14) it can be seen that moving the stop from the lens towards the front focal plane will result in a decrease of coma and astigmatism, but an increase in lateral chromatic aberration. It is, therefore, clear that the optimal location will be somewhere in between.

In order to get an idea of where the optimum location will be, let us look at the order of magnitude of the off-axis aberrations. Since we are discussing a wide-FOV system, we can assume *tanθ*∼1 (*θ*∼*45°*). The apertures we are dealing with are large, so we can also assume *y/f*∼1. We then obtain the following: For stop at the lens, the transverse coma and astigmatism will be on the order of *f* (the focal length), while the lateral chromatic is zero. For stop at front focal plane, the coma and astigmatism will be zero, while the transverse lateral chromatic will be on the order of *f·Δλ/λ*.

Based on the analysis results of Fig. 7, it is clear that for reasonably large apertures (F<2), necessary to obtain enough light, we will be working at *Δλ/λ<<*1. Therefore, the dominant aberrations are the monochromatic ones, and the optimum stop location will be very close to the front focal plane. In this case the coma and astigmatism are canceled, and we can perform the optimization based on the method presented in section 4, with a minor modification: the effect of the lateral chromatic aberration is now added, by introducing an appropriate translation of the chromatic top-hat PSFs before summing to obtain the total PSF. According to Eqs. (2) and (12), the orders of magnitude of the lateral chromatic blur and the axial chromatic blur are similar. Therefore, the results are not expected to differ much from those presented in Fig. 7 for the on-axis case. We performed this simulation for a single field point at 30° FOV, and indeed obtained very similar results.
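The modification described above amounts to one extra line in the PSF summation of section 2 (a Python sketch with illustrative values of the field angle and spectral band; the per-wavelength radial shift follows from the stop-at-front-focal-plane lateral chromatic aberration of Eq. (12)):

```python
import numpy as np

# Off-axis geometrical PSF: each wavelength's top-hat (radius per Eq. (2))
# is translated radially by the lateral chromatic shift before summing.
f, D, lam0 = 1e-3, 1e-3, 550e-9
theta = np.deg2rad(30.0)                  # field angle (example)
x = np.linspace(-5e-6, 5e-6, 4001)        # radial image coordinate [m]
dx = x[1] - x[0]
lams = np.linspace(lam0 - 2.5e-9, lam0 + 2.5e-9, 25)

psf = np.zeros_like(x)
for li in lams:
    dl = li - lam0
    r_axial = max((D / 2) * abs(dl) / lam0, dx)   # axial chromatic radius
    shift = f * np.tan(theta) * dl / lam0         # lateral chromatic shift
    psf += np.abs(x - shift) <= r_axial           # translated top-hat
psf /= psf.sum()
```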

The possibility of image reconstruction via deconvolution is also more complex when off-axis aberrations are dominant, since the deconvolution kernel (physical PSF) must vary over the field-of-view [41,42]. However, for the case of the stop at the front focal plane, where the dominant off-axis aberration is lateral chromatic, there is a relatively simple solution: the effect of lateral chromatic aberration is identical to radial motion blur (caused by the camera moving along the line of sight). In such a case, the deconvolution can be performed with a non-varying kernel. This is done by converting the image to polar coordinates (r,θ). The deconvolution is then performed with a one-dimensional, rectangular-pulse-shaped kernel along the r axis, after which the image is converted back to Cartesian coordinates. Other methods are also available [43].
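The polar-coordinate step can be sketched as follows (a pure-NumPy, nearest-neighbour warp for illustration only; a production implementation would use an interpolating resampler such as `scipy.ndimage.map_coordinates`, and an analogous inverse warp to return to Cartesian coordinates):

```python
import numpy as np

def to_polar(img, n_r, n_t):
    """Resample a square image onto an (r, theta) grid about its center,
    using nearest-neighbour lookup. Deconvolution along axis 0 (the r axis)
    can then use a single 1-D rectangular-pulse kernel for all rows."""
    cy, cx = (np.asarray(img.shape) - 1) / 2
    r = np.linspace(0.0, min(cy, cx), n_r)
    t = np.linspace(0.0, 2 * np.pi, n_t, endpoint=False)
    rr, tt = np.meshgrid(r, t, indexing="ij")
    ys = np.round(cy + rr * np.sin(tt)).astype(int)
    xs = np.round(cx + rr * np.cos(tt)).astype(int)
    return img[ys, xs]
```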

## 7. Conclusions

In this paper we set up a metric for evaluation of metalens performance, allowing fair comparison of novel metalens technologies, such as achromatic metalenses, in terms of optical performance. Based on this metric we show that even at the current state of the art, without correction of chromatic aberration, it may be possible to use metalenses in some broadband spectrum applications (e.g. white light microscopy, daytime outdoor imaging, security camera with infrared LED illumination). However, this needs to be done while giving careful consideration to system requirements, and optimizing the system parameters, particularly the spectral range and aperture, for the application. Following optimization, image processing methods may allow additional improvement of image quality.

In order to implement the optimal spectral band, based on the analysis described in this paper, it is necessary to limit the waveband over which the metalens operates. How can this be done? We suggest three possibilities: (a) a band-pass filter; (b) tailoring of the spectral response of the metalens nano-antennas [44]; (c) tailoring of the spectral response of the sensor [45].

The simplest solution for a single-waveband system is the use of an external band-pass filter. Such filters exist as catalog items, and can be mounted in front of the metalens. However, this solution is bulky, and will not work for a multi-waveband system. A less bulky solution for a single-waveband system is to place the filter near the image plane, possibly integrated with the sensor, but then its thickness must be accounted for in the design of the metalens, as it introduces aberrations.

For the case of a multi-waveband system, such as an RGB color imaging system, the optimization method described in this paper can be applied to each of the wavebands. However, at the implementation stage, there are two added challenges: (a) Performing separate focus correction for each waveband. (b) Fusion of the images from each of the wavebands into a single image.

It has been suggested that three parallel imaging systems can be used to obtain R, G and B signals [17]. In such a case one can easily perform separate focus correction, but fusion of the images may be difficult, due to parallax, since the perspective seen through each of the systems will be slightly different. This could be resolved using image registration methods, and even be used to advantage, to produce a 3D image. However, real-time frame rate may be limited as a result of processing overhead.

Use of tailored spectral response of the metalens/sensor itself for the case of a multi-band system, is demonstrated in [44,46] respectively, where a 3-layer metalens/sensor is used to provide R, G and B images, all in focus simultaneously. The optimization method presented here can be used to evaluate such designs, and determine the optimal spectral response for each of the layers.

The method presented here can be applied not only to “conventional” metalenses, as done in this paper, but also to chromatically corrected metalenses, by taking into account their MTF and transmission/scatter (light scattered into spurious diffraction orders adds background illumination that contributes shot noise but no signal). In this way, the performance of different metalens-based imaging technologies can be compared.
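To make the shot-noise point concrete, the following sketch (with hypothetical, illustrative efficiency and scatter values, not measured data) compares the shot-noise-limited SNR of two lens designs, counting scattered light as background photons that add noise but contribute no signal:

```python
import math

def shot_noise_snr(photons_in, efficiency, scatter_fraction):
    """Shot-noise-limited SNR at a pixel.

    Photons directed into the desired diffraction order form the
    signal; photons scattered into spurious orders land on the sensor
    as background, enlarging the Poisson noise term only.
    """
    signal = photons_in * efficiency
    background = photons_in * scatter_fraction
    noise = math.sqrt(signal + background)  # Poisson (shot) noise
    return signal / noise

# Hypothetical comparison: an uncorrected metalens with high efficiency
# vs. an achromatic design with lower efficiency and more scatter.
snr_uncorrected = shot_noise_snr(1e4, efficiency=0.8, scatter_fraction=0.05)
snr_achromatic = shot_noise_snr(1e4, efficiency=0.3, scatter_fraction=0.4)
```

Folding such an SNR term into the metric alongside the MTF lets chromatically corrected and uncorrected designs be ranked on a common footing.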

## References and links

**1. **P. Lalanne, S. Astilean, P. Chavel, E. Cambril, and H. Launois, “Design and fabrication of blazed binary diffractive elements with sampling periods smaller than the structural cutoff,” J. Opt. Soc. Am. A **16**(5), 1143 (1999). [CrossRef]

**2. **P. Lalanne, J. P. Hugonin, and P. Chavel, “Optical properties of deep lamellar gratings: A coupled bloch-mode insight,” J. Lightwave Technol. **24**(6), 2442–2449 (2006). [CrossRef]

**3. **J. Tervo, V. Kettunen, M. Honkanen, and J. Turunen, “Design of space-variant diffractive polarization elements,” J. Opt. Soc. Am. A **20**(2), 282–289 (2003). [CrossRef] [PubMed]

**4. **U. Levy, C.-H. Tsai, L. Pang, and Y. Fainman, “Engineering space-variant inhomogeneous media for polarization control,” Opt. Lett. **29**(15), 1718–1720 (2004). [CrossRef] [PubMed]

**5. **Y. F. Yu, A. Y. Zhu, R. Paniagua-Dominguez, Y. H. Fu, B. Luk’yanchuk, and A. I. Kuznetsov, “High-transmission dielectric metasurface with 2π phase control at visible wavelengths,” Laser Photonics Rev. **9**(4), 412–418 (2015). [CrossRef]

**6. **Z. Bomzon, V. Kleiner, and E. Hasman, “Pancharatnam--Berry phase in space-variant polarization-state manipulations with subwavelength gratings,” Opt. Lett. **26**(18), 1424–1426 (2001). [CrossRef] [PubMed]

**7. **A. I. Kuznetsov, A. E. Miroshnichenko, M. L. Brongersma, Y. S. Kivshar, and B. Luk’yanchuk, “Optically resonant dielectric nanostructures,” Science **354**(6314), aag2472 (2016). [CrossRef] [PubMed]

**8. **J. S. Clausen, E. Hojlund-Nielsen, A. B. Christiansen, S. Yazdi, M. Grajower, H. Taha, U. Levy, A. Kristensen, and N. A. Mortensen, “Plasmonic metasurfaces for coloration of plastic consumer products,” Nano Lett. **14**(8), 4499–4504 (2014). [CrossRef] [PubMed]

**9. **U. Levy, H. C. Kim, C. H. Tsai, and Y. Fainman, “Near-infrared demonstration of computer-generated holograms implemented by using subwavelength gratings with space-variant orientation,” Opt. Lett. **30**(16), 2089–2091 (2005). [CrossRef] [PubMed]

**10. **G. M. Lerman and U. Levy, “Generation of a radially polarized light beam using space-variant subwavelength gratings at 1064 nm,” Opt. Lett. **33**(23), 2782–2784 (2008). [CrossRef] [PubMed]

**11. **B. Desiatov, N. Mazurski, Y. Fainman, and U. Levy, “Polarization selective beam shaping using nanoscale dielectric metasurfaces,” Opt. Express **23**(17), 22611–22618 (2015). [CrossRef] [PubMed]

**12. **J. Bar-David, L. Stern, and U. Levy, “Dynamic Control over the Optical Transmission of Nanoscale Dielectric Metasurface by Alkali Vapors,” Nano Lett. **17**(2), 1127–1131 (2017). [CrossRef] [PubMed]

**13. **M. Khorasaninejad, W. T. Chen, R. C. Devlin, J. Oh, A. Y. Zhu, and F. Capasso, “Metalenses at visible wavelengths: Diffraction-limited focusing and subwavelength resolution imaging,” Science **352**, 1190–1194 (2016).

**14. **M. Khorasaninejad, A. Y. Zhu, C. Roques-Carmes, W. T. Chen, J. Oh, I. Mishra, R. C. Devlin, and F. Capasso, “Polarization-Insensitive Metalenses at Visible Wavelengths,” Nano Lett. **16**(11), 7229–7234 (2016). [CrossRef] [PubMed]

**15. **A. Arbabi, Y. Horie, A. J. Ball, M. Bagheri, and A. Faraon, “Subwavelength-thick lenses with high numerical apertures and large efficiency based on high-contrast transmitarrays,” Nat. Commun. **6**, 7069 (2015). [CrossRef] [PubMed]

**16. **P. Lalanne and P. Chavel, “Metalenses at visible wavelengths: past, present, perspectives,” Laser Photonics Rev. **11**(3), 1600295 (2017). [CrossRef]

**17. **A. Arbabi, E. Arbabi, S. M. Kamali, Y. Horie, S. Han, and A. Faraon, “Miniature optical planar camera based on a wide-angle metasurface doublet corrected for monochromatic aberrations,” Nat. Commun. **7**, 13682 (2016). [CrossRef] [PubMed]

**18. **F. Aieta, M. A. Kats, P. Genevet, and F. Capasso, “Multiwavelength achromatic metasurfaces by dispersive phase compensation,” Science **347**(6228), 1342–1345 (2015). [CrossRef] [PubMed]

**19. **E. Arbabi, A. Arbabi, S. M. Kamali, Y. Horie, and A. Faraon, “Multiwavelength polarization-insensitive lenses based on dielectric metasurfaces with meta-molecules,” Optica **3**(6), 628 (2016). [CrossRef]

**20. **M. Khorasaninejad, F. Aieta, P. Kanhaiya, M. A. Kats, P. Genevet, D. Rousso, and F. Capasso, “Achromatic Metasurface Lens at Telecommunication Wavelengths,” Nano Lett. **15**(8), 5358–5362 (2015). [CrossRef] [PubMed]

**21. **M. Khorasaninejad, Z. Shi, A. Y. Zhu, W. T. Chen, V. Sanjeev, A. Zaidi, and F. Capasso, “Achromatic Metalens over 60 nm Bandwidth in the Visible and Metalens with Reverse Chromatic Dispersion,” Nano Lett. **17**(3), 1819–1824 (2017). [CrossRef] [PubMed]

**22. **E. Arbabi, A. Arbabi, S. M. Kamali, Y. Horie, and A. Faraon, “Controlling the sign of chromatic dispersion in diffractive optics,” Optica **4**(6), 625–632 (2017). [CrossRef]

**23. **D. C. O’Shea, T. J. Suleski, A. D. Kathman, and D. W. Prather, *Diffractive Optics* (SPIE Press, 2003).

**24. **W. J. Smith, *Modern Optical Engineering*, 3rd ed. (McGraw-Hill, 2000).

**25. **J. W. Goodman, *Introduction to Fourier Optics*, 2nd ed. (McGraw-Hill, 1996).

**26. **L. Levi, *Applied Optics Vol. 1* (Wiley, 1966).

**27. **G. D. Boreman, *Modulation Transfer Function in Optical and Electro-Optical Systems* (SPIE Press, 2001).

**28. **D. A. Buralli and G. M. Morris, “Design of a wide field diffractive landscape lens,” Appl. Opt. **28**(18), 3950–3959 (1989). [CrossRef] [PubMed]

**29. **I. Tomić and I. Karlović, “Practical assessment of veiling glare in camera lens system,” J. Graph. Eng. Des. **5**, 23–28 (2014).

**30. **E. Talvala, A. Adams, M. Horowitz, and M. Levoy, “Veiling glare in high dynamic range imaging,” ACM Trans. Graph. **26**(3), 37 (2007). [CrossRef]

**31. **J. M. Palmer and B. G. Grant, *The Art of Radiometry* (SPIE, 2010).

**32. **J. Wilson and J. Hawkes, *Optoelectronics: An Introduction* (Prentice Hall, 1998).

**33. **E. Parzen, *Modern Probability Theory and Its Applications* (Wiley, 1960).

**34. **G. C. Holst and T. S. Lomheim, *CMOS/CCD Sensors and Camera Systems* (SPIE Press, 2011).

**35. **G. C. Holst, *Testing and Evaluation of Imaging Infrared Systems* (SPIE, 2008).

**36. **G. C. Holst, *Electro-Optical Imaging System Performance*, 5th ed. (SPIE, 2008).

**37. **Y. Zhang, J. Dai, and W. Li, “Measurement of minimum resolvable contrast based on human visual property,” in *International Symposium on Instrumentation Science and Technology*, J. Tan and X. Wen, eds. (2008), Vol. 7133, p. 71333V. [CrossRef]

**38. **R. C. González and R. E. Woods, *Digital Image Processing*, 2nd ed. (Prentice Hall, 2002).

**39. ***Electro-Optics Handbook* (Burle Industries Inc., 1974).

**40. **J. C. Wyant and K. Creath, “Basic Wavefront Aberration Theory for Optical Metrology,” in *Applied Optics and Optical Engineering*, Vol. XI (Academic Press, 1992).

**41. **J. G. Nagy, V. P. Pauca, R. J. Plemmons, and T. C. Torgersen, “Space-varying restoration of optical images,” J. Opt. Soc. Am. A **14**(12), 3162–3174 (1997). [CrossRef]

**42. **H. Cheong, E. Chae, E. Lee, G. Jo, and J. Paik, “Fast Image Restoration for Spatially Varying Defocus Blur of Imaging Sensor,” Sensors (Basel) **15**(1), 880–898 (2015). [CrossRef] [PubMed]

**43. **H. Hong and T. Zhang, “Fast restoration approach for rotational motion blurred image based on deconvolution along the blurring paths,” Opt. Eng. **42**(12), 3471–3486 (2003). [CrossRef]

**44. **O. Avayu, E. Almeida, Y. Prior, and T. Ellenbogen, “Composite Functional Metasurfaces for Multispectral Achromatic Optics,” Nat. Commun. **8**, 14992 (2017). [CrossRef] [PubMed]

**45. **Q. Lin, A. Armin, P. L. Burn, and P. Meredith, “Filterless narrowband visible photodetectors,” Nat. Photonics **9**(10), 687–694 (2015). [CrossRef]

**46. **P. M. Hubel, “Foveon Technology and the Changing Landscape of Digital Cameras,” in *Proceedings of the Thirteenth IS&T Color Imaging Conference* (2005), pp. 314–317.