
Resolution limit in opto-digital systems revisited

Open Access

Abstract

The resolution limit achievable with an optical system is a fundamental piece of information when characterizing its performance, mainly in the case of microscopy imaging. Usually this information is given in the form of a distance, often expressed in microns, or in the form of a cutoff spatial frequency, often expressed in line pairs per mm. In modern imaging systems, where the final image is collected by pixelated digital cameras, the resolution limit is determined by the performance of both the optical system and the digital sensor. Usually, one of these factors is considered to prevail over the other when estimating the spatial resolution, so that the global performance of the imaging system is ruled either by the classical Abbe resolution limit, based on physical diffraction, or by the Nyquist resolution limit, based on the features of the digital sensor. This estimation fails significantly to predict the global performance of opto-digital imaging systems, like 3D microscopes, where neither factor is negligible. In that case, which is indeed the most common, neither the Abbe formula nor the Nyquist formula provides by itself a reliable prediction of the resolution limit. This is a serious drawback, since system designers often use those formulae as design input parameters. To overcome this limitation, a simple mathematical expression, obtained by finely articulating the Abbe and Nyquist formulas, is proposed here to easily predict the spatial resolution limit of opto-digital imaging systems. The derived expression is tested experimentally and shown to be valid over a broad range of opto-digital combinations.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Imaging systems are classified in terms of their optical features. Field of view (FOV), depth of field (DOF), magnification, and spatial resolution are the main figures considered when ranking imaging systems of the same type. Among these features, the spatial resolution limit, understood as the capability of the imaging system to distinguish the images of two adjacent object points, is perhaps the most studied one in any type of imaging system since the very onset of the optical sciences.

This interest has remained very active until today, embracing the evolution demanded by advances in imaging technology. For instance, in modern imaging systems, where the irradiance detectors are discretized digital cameras, the study of the spatial resolution must consider the performance of both the optical system and the digital sensor. The optical performance is determined by wave diffraction, through the shape and size of the diffractive point-spread function (PSF) [1]. The digital performance is mainly determined by the pixel size and spacing through the Nyquist limit [2]. It is usually assumed that the spatial resolution performance of an opto-digital imaging system is ruled only by the dominant effect between the optical and the digital ones. Thus, either wave-optics or pixel-size effects alone are considered, following existing criteria associated exclusively with diffraction or with pixelation [3–5].

To give an example, in digital photography pixels are commonly larger than the diameter of the first lobe of the diffractive PSF. Consequently, pixel size is usually considered to be the limiting factor in terms of spatial resolution. Note, however, that even in the case of a significant difference between the Nyquist and PSF sizes, the wave-optics influence cannot be entirely discarded. This is more evident in modern digital photography, where sensors have progressively smaller pixels.

On the contrary, in conventional microscopy the diffractive PSF tends to be properly sampled by the pixels, so diffraction is typically considered to be the limiting factor. Here, too, the decision to discard one effect (Nyquist in this case) can lead to an overestimation of the resolving power. Furthermore, in optical microscopes there are many cases in which the PSF and the pixel size of the camera are comparable. For instance, in microscopy techniques working in the low-photon regime, the pixel size of the sensor must be compromised in order to increase the number of photons detected by each pixel, as occurs in techniques based on single molecule localization microscopy (SMLM) [6–8]. In the same way, any microscope with low magnification relative to its numerical aperture possesses a diffractive PSF size close to, or even below, the Nyquist sampling requirement. That is the case of Fourier Lightfield Microscopy (FLMic) [9–11], in which the combined effect of dividing the microscope aperture into smaller parts by a lenslet array and reducing the focal length of the tube lens results in a diffraction spot of dimensions comparable with the pixel size. Moreover, in scanning imaging systems, such as confocal laser scanning microscopes (CLSM) [12–14] or stimulated emission depletion microscopy (STED) [15], the resolution of the final image can depend on the capturing conditions. In these scanning systems, the size of the pixel is chosen by the user when selecting the scanning step. This decision is often taken by choosing a scanning step that matches the Nyquist sampling criterion of the PSF of the system [16], which, slightly in CLSM and markedly in STED, is smaller than the widefield PSF.
In such cases, in which the diffraction and pixelation limits are close to each other, both effects must be taken into account to provide a proper resolution limit for the imaging system, as has been pointed out in the past by theoretical analyses performed in the frequency domain [17]. Naturally, new optical approaches that do not require lenses may remedy issues with the lens PSF [18–20].

When planning the practical implementation of an imaging system, usually with off-the-shelf components, researchers or system designers need easy formulas for the preliminary evaluation of its optical features. Concerning the spatial resolution, in most cases neither the diffraction nor the pixelation effects are negligible. It is then necessary to define a figure that considers the combined action of both effects. To provide insight in this regard, this work presents an analysis of the combined performance of physical diffraction and the digital camera, whether the two have different or equal weight in the global spatial resolution. The analysis leads to a simple mathematical expression that finely articulates the well-known Abbe and Nyquist resolution limits [3,5] to forecast the global spatial-resolution performance of an opto-digital imaging system. The utility of the proposed formula is verified experimentally.

2. PSF of opto-digital imaging systems

The spatial resolution of a modern imaging system is characterized by the performance of the global setup, including the physical diffraction and the sensor features. Assuming that the global system is, in good approximation, linear and shift invariant (LSI), that performance is analyzed mathematically through the global and the individual PSFs, represented here by functions $h(x,y)$, which describe the response of the system to a point object [1]. Note that in LSI systems the response to a laterally-displaced point object is also shifted while preserving its shape [1,21]. Given that in an opto-digital imaging system the sensor collects the image provided by the optics of the system, and accepting that the sensor response can be characterized through a sensor PSF, one can write

$$I(x,y)=\left[ \frac{1}{\left|{M}\right|^2}O\left(\frac{x}{M},\frac{y}{M}\right)\otimes h_\mathrm{diff}(x,y) \right] \otimes h_\mathrm{sens}(x,y),$$
where $O(x,y)$ is the object intensity distribution, and $M$ the lateral magnification of the system.

Then, exploiting the associative property of the convolution, one can define a global PSF as the convolution of the two individual PSFs,

$$h_\mathrm{glob}(x,y)=h_\mathrm{diff}(x,y)\otimes h_\mathrm{sens}(x,y).$$

In these terms, the two-dimensional (2D) irradiance distribution of the image captured by the opto-digital imaging system is given by,

$$I(x,y)=\frac{1}{\left|{M}\right|^2}O\left(\frac{x}{M},\frac{y}{M}\right)\otimes h_\mathrm{glob}(x,y).$$

The diffraction PSF should take into account both the effect of apertures and diffracting screens on the wave propagation and the effect of optical aberrations. Although aberration effects are an important issue, in this study we assume that the design of the optical elements is good enough to largely compensate for the aberrations, so that we can neglect them and focus our study on wave propagation. Then the incoherent diffractive PSF is equal to the square modulus of the Fourier transform of the transmittance of the aperture stop (AS) of the system. Typically, the AS of an imaging system has a circular shape, so its Fourier transform is the Airy disk function; hence the diffraction PSF, as evaluated at the image plane, is:

$$h_\mathrm{diff}(x,y)=\left|\left(\frac{\phi_{\mathrm{AS}}}{2}\right)^2\mathrm{Disk}\left(r\frac{\mathrm{NA}'}{\lambda}\right)\right|^2,$$
where $r=\sqrt {x^2+y^2}$ is the radial coordinate, $\phi _\mathrm {AS}$ the aperture stop diameter, $\mathrm {NA}'=\mathrm {NA}/M$ the numerical aperture in the image space, with $\mathrm {NA}$ its equivalent in the object space, $\lambda$ the wavelength, and $\mathrm {Disk}(r)=J_1(r)/r$ the Airy disk function, with $J_1(r)$ the Bessel function of the first kind and first order [1,21].

As far as the sensor PSF is concerned, a point object should be considered to ideally correspond to a pixel in the sensor plane. The detector footprint PSF is equivalent to a pixel, that is, a rectangle function whose width is the pixel size, $\Delta _\mathrm {p}$ [2,17]. In addition, the sampling inherent in any detection system due to the discretization of pixels must be considered. In the case of a pixelated digital camera, a sample is taken at each spatial interval equal to the pixel size. This sampling causes the system not to be strictly LSI: the captured image is different if the image of the input point object falls over the center or the edge of a given pixel. Hence, the recorded irradiance distribution depends on the position at which the image falls on the sensor. This fact violates the shift-invariance assumption, thus forbidding stricto sensu the use of direct convolution operations. To recover the possibility of using the LSI formalism, it is necessary to generalize and define an average sampling PSF [22]. For this purpose, the PSF of the sampling by a square-pixel sensor is considered to be a rectangle function whose width is the sampling interval [17]. In this study we consider the case of contiguous pixels, so this interval is equal to the pixel width. The sensor PSF is therefore the result of the convolution between the detector footprint PSF and the sampling PSF,

$$h_\mathrm{sens}(x,y)=h_\mathrm{pix}(x,y)\otimes h_\mathrm{sampl}(x,y)=\mathrm{rect}\left(\frac{x}{\Delta_\mathrm{p}},\frac{y}{\Delta_\mathrm{p}}\right) \otimes \mathrm{rect}\left(\frac{x}{\Delta_\mathrm{p}},\frac{y}{\Delta_\mathrm{p}}\right),$$
that is equal to a triangle function of base $2\Delta _\mathrm {p}$ [1].
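This statement can be checked numerically. The following sketch in plain Python (illustrative, not from the paper) convolves two identical rect functions and recovers the triangular result:

```python
# Numerical check: convolving two identical box (rect) functions of
# width w yields a triangle function of base 2*w.

def convolve(f, g, dx):
    """Discrete linear convolution of two sampled functions."""
    n = len(f) + len(g) - 1
    out = [0.0] * n
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            out[i + j] += fi * gj * dx
    return out

dx = 0.01                      # sampling step (arbitrary units)
w = 1.0                        # box width, playing the role of the pixel size
box = [1.0] * int(w / dx)      # rect of width w and height 1

tri = convolve(box, box, dx)

# The result peaks at the centre with value w (the area of the box) and
# falls linearly to zero at the edges, i.e. a triangle of base 2*w.
print(max(tri))                # ~1.0
print(len(tri) * dx)           # support ~2.0 = 2*w
```

The peak value equals the box area and the support doubles, exactly as used in the argument above.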

As shown in Fig. 1 for three different cases, the global PSF is a modified Airy disk, broadened by convolution with the sensor PSF. This broadening arises because the convolution of functions of compact support results in a function whose width is equal to the sum of the widths of the original functions. Figure 1(a) shows the case in which the diffraction PSF is much wider than the sensor PSF, and therefore the global PSF closely resembles the diffraction one. In contrast, Fig. 1(b) shows a case where the sensor PSF is much wider than the diffraction PSF, giving rise to a global PSF whose size and shape are similar to those of the sensor one. Finally, in Fig. 1(c) we show an intermediate, or hybrid, case in which neither effect is dominant, resulting in a global PSF with shape and size that are comparably influenced by diffraction and sensor features.


Fig. 1. Global PSF of the opto-digital imaging system. We have plotted the diffractive PSF (Airy disk) in dotted-red line, the pixel PSF (rectangle of width $\Delta _\mathrm {p}$) in solid-yellow line, and the sampling PSF (rectangle of width $\Delta _\mathrm {p}$) in dashed-purple line. Additionally, we have drawn the total PSF of the sensor (triangle of base $2\Delta _\mathrm {p}$) with dashed-green line. Finally, the global PSF of the system is plotted with solid-blue line. (a) Case dominated by diffraction; (b) case dominated by pixel size; (c) hybrid case.


The lateral resolution of systems governed strictly either by diffraction or by the pixelated digital camera has been studied extensively, and well-known formulae exist for the easy calculation of the resolution limit. If the resolution is assumed to be limited only by wave diffraction, Abbe's resolution limit is the key figure [3]. Abbe's resolution limit evaluates which spatial frequencies are filtered out as a function of the size and shape of the aperture stop. For a circular AS, its radius provides a certain spatial cut-off frequency, indicating that higher spatial frequencies are not transmitted to the final image, thus limiting the spatial resolution. The equivalent in the spatial domain of this cut-off frequency is Abbe's resolution limit [23],

$$r_\mathrm{Abbe}=\frac{\lambda}{2\,\mathrm{NA}}.$$

Conversely, if the effects of diffraction and aberrations are negligible as compared with those of the pixel size, the resolution is determined by the pixel structure of the sensor. According to the Nyquist criterion [5], two image points are resolvable if they are captured by different pixels and there is at least one pixel in between them. In the image space this distance is $2\Delta _\mathrm {p}$, so in the object space the Nyquist resolution limit is

$$r_\mathrm{Nyq}=\frac{2\Delta_\mathrm{p}}{\left|{M}\right|}.$$
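Both limits are straightforward to evaluate. As an illustration, the helpers below use the 490 nm wavelength adopted later in the paper together with the 10×/0.45 objective and 5.04 µm pixel quoted in the experimental section; the pairing of these components here is only an example:

```python
def r_abbe(wavelength, na):
    """Abbe resolution limit, Eq. (6): lambda / (2 NA)."""
    return wavelength / (2.0 * na)

def r_nyquist(pixel_size, magnification):
    """Nyquist resolution limit, Eq. (7), referred to the object space."""
    return 2.0 * pixel_size / abs(magnification)

# Example values: 490 nm LED, 10x / NA 0.45 objective, 5.04 um pixels.
print(r_abbe(0.490, 0.45))     # ~0.54 um
print(r_nyquist(5.04, 10.0))   # ~1.01 um: here the Nyquist limit is the larger one
```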

It is usual to consider one of these two values, $r_{\text {Abbe}}$ or $r_{\text {Nyq}}$, to estimate the lateral resolution of opto-digital imaging systems, depending on whether the diffraction spot sampling is above or below the Nyquist limit, taking as resolution limit the one with the highest value. Nonetheless, there is a range of sizes of both the pixel and the diffraction spot (see Fig. 1(c)) in which neither limit accurately expresses the resolution limit of the system. Furthermore, due to the complexity of the global PSF, a simple formula for an easy calculation of the expected resolution limit is not yet available.

To fill this gap, the following section describes a method to estimate the global resolution when the Airy disk and the pixel size are comparable in size, and a simple mathematical formula is proposed.

3. Transfer of spatial frequencies

Assuming an opto-digital imaging system that, in good approximation, is LSI, its performance is fully characterized by its PSF, and therefore the final image distribution can be expressed according to Eqs. (2) and (3). These successive convolutions in the spatial domain can be analyzed more simply in the frequency domain through a Fourier transform. Accordingly, the frequency content of the final captured image is given by a product of functions,

$$\tilde{I}(u,v)=\left|{M}\right|^2\tilde{O}(Mu,Mv)\times \tilde{h}_\mathrm{diff}(u,v)\times\tilde{h}_\mathrm{sens}(u,v),$$
with $(u,v)$ the spatial-frequency coordinates. Each $\tilde {h}(u,v)$ function is the Fourier transform of the respective irradiance PSF and is commonly known as the Optical Transfer Function (OTF) [1]. Since the PSFs involved here are all real and even functions, the positive side of each OTF is equal to its modulus, the Modulation Transfer Function (MTF). The MTF describes how spatial frequencies are modulated and transferred in the image formation and acquisition process, and it is the function commonly used for system characterization [1,17].

The diffractive MTF is the Fourier transform of Eq. (4), with the functional form [17]

$$\mathrm{MTF}_\mathrm{diff}(u,v)=\frac{2}{\pi}\left[\mathrm{arccos}\left(\frac{\rho}{\rho_0}\right)-\frac{\rho}{\rho_0}\sqrt{1-\left(\frac{\rho}{\rho_0}\right)^2}\right],$$
where $\rho =\sqrt {u^2+v^2}$ is the radial frequency and $\rho _0$ is the diffractive cut-off frequency of the imaging system, which according to Abbe is $\rho _0=2 \mathrm {NA}/\lambda$ (see Eq. (6)).
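Equation (8) translates directly into code. The sketch below (an illustrative helper; the NA and wavelength are arbitrary example values) also verifies the two limiting values of the diffractive MTF:

```python
import math

def mtf_diff(rho, rho0):
    """Diffractive MTF of an incoherent, aberration-free circular
    aperture, Eq. (8); zero beyond the cut-off frequency rho0."""
    if rho >= rho0:
        return 0.0
    x = rho / rho0
    return (2.0 / math.pi) * (math.acos(x) - x * math.sqrt(1.0 - x * x))

# Cut-off frequency for NA = 0.45 and lambda = 0.490 um (Abbe):
wavelength, na = 0.490, 0.45
rho0 = 2.0 * na / wavelength          # cycles per micron

print(mtf_diff(0.0, rho0))            # 1.0 at zero frequency
print(mtf_diff(rho0, rho0))           # 0.0 at the cut-off
```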

The sensor MTF, including the pixel MTF and the sampling MTF, is the Fourier transform of Eq. (5),

$$\mathrm{MTF}_\mathrm{sens}(u,v)=\mathrm{MTF}_\mathrm{pix}(u,v)\times\mathrm{MTF}_\mathrm{sampl}(u,v)=\left[\mathrm{sinc}\left(u\frac{\Delta_\mathrm{p}}{M}\right)\mathrm{sinc}\left(v\frac{\Delta_\mathrm{p}}{M}\right)\right]^2,$$
with the sinc function defined as sinc$(t)=\sin (\pi t)/(\pi t)$.

These MTFs are displayed, for a particular case, in Fig. 2, together with the global MTF, which is obtained as $\mathrm {MTF}_\mathrm {global} (u,v)=\mathrm {MTF}_\mathrm {diff} (u,v)\times \mathrm {MTF}_\mathrm {sens} (u,v)$. In this example the three MTFs are similar in width, which results in a global MTF that is much narrower. Consequently, a stronger filtering of spatial frequencies occurs when considering the global system, as compared with the filtering performed by the diffraction or by the sensor considered individually. This simple analysis illustrates the importance of considering the combined performance of all the subsystems when determining the global spatial resolution of an opto-digital imaging system.


Fig. 2. Modulation Transfer Function of the global opto-digital imaging system. Solid-blue line for the global MTF. Dotted-red line represents the diffraction performance. The MTF of the pixel is represented in solid-yellow, and the MTF of the sampling in dashed-purple. The spatial frequencies beyond which the MTFs fall below $10\%$ are indicated with black circles.


As said before, the MTF defines the spatial resolution of an imaging system. The cut-off value beyond which the system introduces an unacceptable modulation of the transferred spatial frequency, represented by a decay of the MTF below a given threshold, dictates the actual cut-off frequency of the imaging system, whose mathematical inverse is the sought spatial resolution limit. Conventionally, the said cut-off frequency is given by the threshold value 0.1 [17,24].

When optical designers, or researchers, face the design of an opto-digital imaging experiment with off-the-shelf components, they must select the adequate objective (NA and magnification), relay optics, and pixelated sensor. Such a selection aims to meet the required optical specifications, like resolution, FOV, or DOF. According to the above statements, to predict the resolution for any combination of opto-digital components, the designer must solve numerically the equation

$$0.1=\mathrm{MTF}_\mathrm{diff} (u,v) \times\mathrm{MTF}_\mathrm{sens} (u,v),$$
which is somewhat challenging, and then calculate the inverse, named here $r_\mathrm {MTF}$, of the obtained 10% cut-off frequency. Note that this procedure contrasts strongly with the usual one, consisting of calculating a single number by use of Eq. (6) or Eq. (7).
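The numerical step can be sketched with a simple bisection along the on-axis direction ($v=0$), using only the expressions of Eqs. (8) and (9). This is an illustrative implementation, not the authors' code, and it assumes the global MTF decreases monotonically up to the diffraction cut-off:

```python
import math

def mtf_diff(rho, rho0):
    """Diffractive MTF, Eq. (8); zero beyond the cut-off rho0."""
    if rho >= rho0:
        return 0.0
    x = rho / rho0
    return (2.0 / math.pi) * (math.acos(x) - x * math.sqrt(1.0 - x * x))

def mtf_sens(u, pixel_obj):
    """Sensor MTF along one axis, Eq. (9): sinc^2(u * pixel / |M|)."""
    t = u * pixel_obj
    if t == 0.0:
        return 1.0
    s = math.sin(math.pi * t) / (math.pi * t)
    return s * s

def r_mtf(wavelength, na, pixel_obj, threshold=0.1):
    """Resolution limit from the 10% cut-off of the global MTF, Eq. (11),
    found by bisection along the u axis (v = 0)."""
    rho0 = 2.0 * na / wavelength            # Abbe cut-off frequency
    f = lambda u: mtf_diff(u, rho0) * mtf_sens(u, pixel_obj) - threshold
    lo, hi = 0.0, rho0                      # global MTF goes from 1 down to 0
    for _ in range(80):                     # bisection to machine precision
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 1.0 / (0.5 * (lo + hi))          # resolution = inverse cut-off

# Example: lambda = 0.49 um, NA = 0.5, pixel size 0.25 um in object space.
print(r_mtf(0.490, 0.5, 0.25))              # ~0.67 um
```

With these example values, the 10% cut-off lies near 1.5 cycles/µm, giving a resolution limit of roughly 0.67 µm.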

4. Formula for the resolution limit for opto-digital imaging systems

To be able to calculate quickly, and with good agreement, the spatial resolution performance of an opto-digital imaging system, a heuristic mathematical expression is proposed here, which comes from a linear combination of the Abbe and Nyquist resolution limits,

$$r_\mathrm{heur}=r_\mathrm{Abbe}+\frac{1}{2}r_\mathrm{Nyq}=\frac{\lambda}{2\,\mathrm{NA}}+\frac{\Delta_\mathrm{p}}{\left|{M}\right|}.$$

The underlying physics of this equation lies in the fact that the Abbe limit corresponds approximately to half the diameter of the Airy disk. Following the same idea, half of the base of the sensor PSF, including footprint and sampling effects, is $\Delta _\mathrm {p}/\left|{M}\right|$ in the object space. As the base of the convolution of two functions of compact support is equal to the sum of the bases of the convolved functions, it is proposed to sum these two expressions to get the resolution limit, as expressed in Eq. (12).
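In code, Eq. (12) is a one-liner; the example values below (λ = 490 nm, NA = 0.5, Δp/|M| = 0.25 µm, matching the slice used in Fig. 4) are purely illustrative:

```python
def r_heur(wavelength, na, pixel_obj):
    """Heuristic resolution limit, Eq. (12): the Abbe term plus half the
    Nyquist term (pixel size referred to the object space)."""
    return wavelength / (2.0 * na) + pixel_obj

# lambda = 0.49 um, NA = 0.5, Delta_p / |M| = 0.25 um:
print(r_heur(0.490, 0.5, 0.25))   # 0.49 + 0.25 = 0.74 um
```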

To contrast the validity of Eq. (12), we have calculated four alternative values for the resolution limit: $r_\mathrm {heur}$, $r_\mathrm {MTF}$, $r_\mathrm {Abbe}$, and $r_\mathrm {Nyq}$. The four results are plotted in Fig. 3 for different values of NA and different pixel sizes in the object space, $\Delta _\mathrm {p}/\left|{M}\right|$. Without loss of generality, the study has been performed at a fixed wavelength of 490 nm.


Fig. 3. Spatial resolution limit for different values of NA and pixel size in the object space. Blue surface shows the spatial resolution calculated as Eq. (12). Magenta surface is the spatial resolution computed as the mathematical inverse of the cut-off frequency corresponding to the 0.1 value of the global MTF. Cyan and gray surfaces are the Abbe’s and Nyquist’s criteria, respectively. The wavelength is set to 490 nm.


In Fig. 3, one can recognize three different resolution regions. The first is the region in which the heuristic formula, Eq. (12), and Abbe's formula predict similar values for the resolution limit. We name this region, characterized by low values of $\Delta _\mathrm {p}/\left|{M}\right|$, the small pixel region. In this region the opto-digital system is often considered to be diffraction limited, which has classically led to simply considering Abbe's criterion. However, even in this region, the MTF curve, and also the heuristic one, still lie above the Abbe curve, showing that the actual resolution limit is slightly larger than the one predicted by Abbe. This indicates that, even in diffraction-limited configurations, it is more appropriate to use the proposed Eq. (12) than just Abbe's resolution limit.

The same Fig. 3 shows a region, characterized by large values of NA and of $\Delta _\mathrm {p}/\left|{M}\right|$, in which Nyquist's resolution limit is larger than the others. We name this region the large pixel region.

Around the hybrid region of Fig. 3, where the Abbe and Nyquist resolution limits are quite close to each other, the values of the resolution limit provided by the $r_\mathrm {heur}$ and $r_\mathrm {MTF}$ formulae are much greater than those predicted by Abbe or by Nyquist.

To ease the analysis, a slice of Fig. 3 is shown in Fig. 4, plotting the resolution limits for different values of NA and a fixed value $\Delta _\mathrm {p}/\left|{M}\right|=0.25\;\mathrm {\mu m}$, equivalent to, for instance, $\Delta _\mathrm {p}=5\;\mathrm {\mu m}$ and $\left|{M}\right|=20$. Note that the proposed resolution limit $r_\mathrm {heur}$ closely approximates the curve corresponding to $r_\mathrm {MTF}$. To analyze this approximation quantitatively, one can define a figure, named here the dominance ratio $r_\mathrm {Nyq} / r_\mathrm {Abbe}$, to evaluate the range in which the Abbe and Nyquist resolution limits are comparable. The boundaries of that range are chosen here as $0.25 \leq r_\mathrm {Nyq} / r_\mathrm {Abbe} \leq 4.0$, which are covered by the slice plotted in Fig. 4. Calculating within this plotted range the relative error between the proposed limit, Eq. (12), and that computed from the MTF, $\left|{r_\mathrm {MTF}-r_\mathrm {heur}}\right|/r_\mathrm {MTF}$, the maximum relative error is $13\%$, while the average is $9\%$. It becomes clear that the proposed formula is a good approximation of the expected resolution limit when the resolution of the opto-digital system is governed comparably by the physical diffraction and the digital camera features.
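Relative-error figures of this kind can be re-derived with a short sweep over NA. The sketch below builds a 1-D on-axis MTF model from Eqs. (8), (9), and (11), so its numbers are indicative and may differ slightly from the paper's full computation:

```python
import math

def mtf_diff(rho, rho0):
    """Diffractive MTF, Eq. (8)."""
    if rho >= rho0:
        return 0.0
    x = rho / rho0
    return (2.0 / math.pi) * (math.acos(x) - x * math.sqrt(1.0 - x * x))

def mtf_sens(u, pixel_obj):
    """On-axis sensor MTF, Eq. (9): sinc^2."""
    t = u * pixel_obj
    if t == 0.0:
        return 1.0
    s = math.sin(math.pi * t) / (math.pi * t)
    return s * s

def r_mtf(wavelength, na, pixel_obj, threshold=0.1):
    """Inverse of the 10% cut-off of the global MTF (bisection)."""
    rho0 = 2.0 * na / wavelength
    lo, hi = 0.0, rho0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if mtf_diff(mid, rho0) * mtf_sens(mid, pixel_obj) > threshold:
            lo = mid
        else:
            hi = mid
    return 1.0 / (0.5 * (lo + hi))

wavelength, pixel_obj = 0.490, 0.25     # um, as in the Fig. 4 slice
errors = []
na = 0.15
while na <= 1.0:
    rm = r_mtf(wavelength, na, pixel_obj)
    rh = wavelength / (2.0 * na) + pixel_obj     # Eq. (12)
    errors.append(abs(rm - rh) / rm)
    na += 0.05

print(max(errors))    # maximum relative error across the sweep, on the order of 10%
```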


Fig. 4. Resolution limits for $\Delta _\mathrm {p}/\left|{M}\right|=0.25\;\mathrm {\mu m}$; slice computed from Fig. 3. Solid-blue line: resolution limit predicted by Eq. (12). Solid-magenta line: computed from the mathematical inverse of the $10\%$ MTF cut-off frequency. Dashed-cyan line: Abbe's criterion. Dotted-black line: Nyquist's criterion.


5. Experimental validation

To study the reliability of the proposed figure of merit, $r_\mathrm {heur}$, an experiment was designed to measure the spatial resolution limit of an opto-digital system with variable NA and fixed pixel size. By changing the NA, the ratio between the Abbe and Nyquist resolution limits is tuned. In this way the three resolution regions can be studied with the same system. The NA of the microscope objective (MO) was tuned by gradually changing the aperture stop (AS) diameter. To do this in a flexible and accurate way, an afocal relay arrangement was placed to generate an optically conjugated plane of the AS. An iris aperture was located at this conjugate plane, so that changing the iris size directly implies a modification of the AS diameter and, therefore, of the effective NA.

The scheme of the proposed experimental setup is shown in Fig. 5. The microscope branch of this scheme is equivalent to that of a telecentric microscope with an infinity-corrected MO and a tube lens (TL), but inserting the relay lenses L1 and L2. This branch is identified by the green beam. Additionally, to measure the iris diameter, a beam splitter and a second afocal relay system, composed of lenses L3 and L4, are placed. This second branch, represented by the red beam, allows the capture of an image of the iris.


Fig. 5. System to study the spatial resolution for different NA at fixed pixel size. The green branch is the microscope whose resolution is measured, with an iris stop conjugated to the AS plane of the MO. The red branch is used to measure the iris diameter by capturing its image.


The choice of the MO, the L1 and L2 lenses, and the TL must ensure that Nyquist's resolution limit dominates when the iris is fully open. To help with this choice, Eqs. (6) and (7) are rewritten in terms of the parameters to be chosen. In dry objectives, $\mathrm {NA}=\phi _\mathrm {AS}/2f_\mathrm {MO}$, with $f_\mathrm {MO}$ the focal length of the MO. The AS diameter is related to the iris diameter by the lateral magnification of the afocal system, $\phi _\mathrm {AS}=\phi _\mathrm {iris}\,f_1/f_2$. The focal length of the MO may be specified by the manufacturer. If this is not the case, it is obtained from its nominal magnification, $M_\mathrm {MO}$, and from knowing its manufacturer, since each manufacturer refers to a certain TL focal length, $f_\mathrm {TL}$. Choosing a Nikon MO, which operates with $f_\mathrm {TL}=200\,\text {mm}$, the focal length of the MO is $f_\mathrm {MO}=200\,\text {mm}/M_\mathrm {MO}$. Therefore, Abbe's resolution limit is rewritten as

$$r_\mathrm{Abbe}=\frac{\lambda}{\phi_\mathrm{iris}}\frac{200\,\text{mm}}{M_\mathrm{MO}}\frac{f_2}{f_1}.$$

Knowing that the lateral magnification of the microscope branch is $M=\left (f_1 f_\mathrm {TL}\right )/\left (f_2 f_\mathrm {MO}\right )$, the Nyquist’s resolution limit is

$$r_\mathrm{Nyq}=2\Delta_\mathrm{p}\frac{f_2\,200\,\text{mm}}{f_1 f_\mathrm{TL} M_\mathrm{MO}}.$$

Therefore, the dominance ratio is

$$\frac{r_\mathrm{Nyq}}{r_\mathrm{Abbe}}=\frac{2\Delta_\mathrm{p}\phi_\mathrm{iris}}{\lambda f_\mathrm{TL}}.$$

The camera used in the microscope branch is the CS2100M-USB model from Thorlabs: a 16-bit, monochromatic, sCMOS-type sensor with $1920\times 1080$ pixels of size $\Delta _\mathrm {p}=5.04\;\mathrm {\mu m}$. The iris used has a maximum diameter of 18 mm. An LED with an emission spectrum centered at a wavelength of 490 nm and a bandwidth of 40 nm was used as an illuminator. Considering these values for a fully opened iris, Eq. (15) results in $r_\mathrm {Nyq}/r_\mathrm {Abbe}=370 /f_\mathrm {TL}$, provided that $f_\mathrm {TL}$ is expressed in mm. In microscopy, $f_\mathrm {TL}$ is typically in the range of 160 to 200 mm, depending on the manufacturer. Taking $f_\mathrm {TL}=200\,\text {mm}$ results in $r_\mathrm {Nyq}/r_\mathrm {Abbe}=1.85$, which is not large enough to consider that Nyquist's resolution limit dominates. Choosing $f_\mathrm {TL}=100\,\text {mm}$, $r_\mathrm {Nyq}/r_\mathrm {Abbe}=3.70$ is obtained, and under these conditions one can consider that diffractive effects are not relevant. For this reason, this value of $f_\mathrm {TL}$ was chosen for our experimental setup.
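The figure $370/f_\mathrm {TL}$ quoted above can be checked directly from Eq. (15) with the stated component values (Δp = 5.04 µm, φ_iris = 18 mm, λ = 490 nm):

```python
# Dominance ratio of Eq. (15): r_Nyq / r_Abbe = 2 * pixel * iris / (lambda * f_TL).
pixel = 5.04e-6       # m  (sensor pixel size)
iris = 18e-3          # m  (fully open iris diameter)
wavelength = 490e-9   # m  (LED central wavelength)

numerator = 2.0 * pixel * iris / wavelength   # has units of length
print(numerator * 1e3)            # ~370 (mm), so the ratio is 370 / f_TL[mm]
print(numerator / 200e-3)         # f_TL = 200 mm -> ~1.85
print(numerator / 100e-3)         # f_TL = 100 mm -> ~3.70
```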

Moreover, the MO and the L1 and L2 lenses must ensure that the image of the iris fits inside the AS. Taking a Nikon MO of $\mathrm {NA}=0.45$ and magnification $10\times$, so that $f_\mathrm {MO}=20\,\text {mm}$, the diameter of its AS is calculated to be $\phi _\mathrm {AS}=18\,\text {mm}$. Since the maximum diameter of the iris is also this value, the magnification of the afocal system must be equal to unity, thus satisfying $f_2/f_1=1$. Both focal lengths were chosen to be $f_1=f_2=200\,\text {mm}$.

The camera for measuring the iris size is the Thorlabs model DCU223C, with a CCD sensor of pixel size $\Delta _\mathrm {p}=4.65\;\mathrm {\mu m}$ and area $5.80\times 4.92\,\text {mm}^2$. To measure the iris diameter, its image on the sensor plane must be smaller than the sensor size. For this reason, L3 and L4 focal lengths are chosen to be $f_3=200\,\text {mm}$ and $f_4=50\,\text {mm}$, so that the lateral magnification of this afocal system is $1/4$.

To measure the spatial resolution of the system, the slanted edge method [24–26] is implemented in this work. This method is based on capturing the image of an edge that is slightly slanted with respect to the vertical axis of the sensor, so that each row of pixels across the edge is a sampling of the edge spread function (ESF). The ESF is undersampled if one considers a single row, so the pixel data near the edge are projected and combined to improve the sampling. The derivative of this ESF is computed to obtain the line spread function (LSF), the equivalent of the PSF in the one-dimensional case. The MTF is finally obtained as the discrete Fourier transform of the LSF. The measurements were made using the Slant Edge MTF target from Thorlabs, consisting of a $5^{\mathrm {o}}$ slanted, L-shaped pattern. A set of images of this test was captured for different values of the iris diameter, which was also measured. The measurement uncertainty of the diameter is estimated to correspond to 5 pixels, with a value of 0.09 mm in the object space. Each image was then computationally analyzed to obtain the MTF information. In Fig. 6, the captured image of the slanted edge, that of the iris, and the corresponding MTF for two different pupil sizes are displayed. In both plots of the MTF in Fig. 6, a decay is observed as the spatial frequency increases. As one can expect, higher frequencies suffer greater attenuation. Of course, the MTF corresponding to the smaller pupil size (smaller NA) decays faster than that for the larger pupil (larger NA). In the plots of the MTF, a horizontal line has been drawn indicating the value 0.1, crossing the MTF curves at the spatial frequencies whose contrast is $10\%$. Below this value, it is considered that the higher frequencies are not well discerned in the image. This experimental cut-off frequency corresponds to the mathematical inverse of the experimental spatial resolution limit of the imaging system.
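The processing chain ESF → LSF → MTF can be sketched on synthetic 1-D data. In the illustration below, a Gaussian-blurred edge is a stand-in for a real capture, and a direct DFT on an already super-sampled profile replaces the projection and compositing step of the full 2-D slanted-edge algorithm:

```python
import math

# Synthetic, already super-sampled ESF: an ideal edge blurred by a Gaussian.
n, dx, sigma = 512, 0.05, 0.5          # samples, step (um), blur width (um)
x = [(i - n // 2) * dx for i in range(n)]
esf = [0.5 * (1.0 + math.erf(xi / (sigma * math.sqrt(2.0)))) for xi in x]

# LSF: numerical derivative of the ESF (central differences).
lsf = [(esf[i + 1] - esf[i - 1]) / (2.0 * dx) for i in range(1, n - 1)]

# MTF: magnitude of the discrete Fourier transform of the LSF,
# normalized to 1 at zero frequency.
def dft_mag(signal, k):
    re = sum(s * math.cos(2.0 * math.pi * k * i / len(signal))
             for i, s in enumerate(signal))
    im = sum(s * math.sin(2.0 * math.pi * k * i / len(signal))
             for i, s in enumerate(signal))
    return math.hypot(re, im)

dc = dft_mag(lsf, 0)
mtf = [dft_mag(lsf, k) / dc for k in range(20)]

print(mtf[0])            # 1.0 by construction
print(mtf[10] < mtf[1])  # True: the MTF decays with frequency
```

For a Gaussian LSF the resulting MTF is itself a Gaussian decay, so the monotonic fall-off with frequency described in the text is reproduced.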


Fig. 6. Experimental measurement of the MTF. (a) Image and size of the iris. (b) Captured image of the slanted edge for that pupil size. (c) Different iris size and (d) corresponding image of the edge. (e) Experimental MTF obtained from the edge in (b), as a function of the spatial frequency in object space. (f) Experimental MTF of the edge in (d). The cutoff frequency, where the MTF decays below $10\%$, is indicated with a red circle. Distances and frequencies are referenced to the object space. The zoomed-in areas enlarge an edge of the test.

By capturing and analyzing a set of images equivalent to those in Fig. 6, the experimental resolution limit, $r_\mathrm {exp}$, was measured for different NA at constant pixel size. These measurements are plotted in Fig. 7, together with the $r_\mathrm {heur}$, $r_\mathrm {MTF}$, $r_\mathrm {Abbe}$ and $r_\mathrm {Nyq}$ curves.

Fig. 7. Experimental resolution limit for different NA computed from the MTF measured by the slanted edge method (red dots). These results are compared with $r_\mathrm {heur}$, $r_\mathrm {MTF}$, $r_\mathrm {Abbe}$ and $r_\mathrm {Nyq}$.

Comparing the experimental values with those provided by the different theoretical formulae, we reach the following conclusions. (a) Abbe's formula always underestimates the resolution limit; in other words, it always overestimates the lateral resolution of opto-digital imaging systems. This overestimation occurs even in the Abbe-prevalence region (the small-pixel region). As a consequence, the maximum resolution predicted by this criterion may seem experimentally unattainable. (b) Along both the small-pixel region and the hybrid region, the $r_\mathrm {heur}$ and $r_\mathrm {MTF}$ formulae accurately match the experimental results, which confirms their utility over a large range of experimental situations. (c) Finally, in the large-pixel region the $r_\mathrm {Nyq}$ formula successfully predicts the lateral resolution of the system. In this case the pixel is larger than the diffraction PSF, so the system behaviour can no longer be considered LSI and, consequently, neither the MTF model nor the heuristic model remains valid. Moreover, in this region aliasing effects may appear if the sampling rate falls below the Nyquist condition; the effects of sampling clearly dominate over those of diffraction. With this in mind, we have calculated, along the small-pixel and hybrid regions (excluding the last four data points of Fig. 7), the relative error between the experimental data and the values given by the $r_\mathrm {heur}$ formula, whose average value is $10\%$. Thus, the proposed general criterion for estimating the lateral resolution of an opto-digital system, $r_\mathrm {heur}$, holds whenever the sampling is not far below the Nyquist criterion; in that extreme case, Nyquist's formula still provides the best agreement with the experimental values.
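The three regimes discussed above can be explored numerically with the formulae of the text, $r_\mathrm{Abbe}=\lambda/2\mathrm{NA}$, $r_\mathrm{Nyq}=2\Delta_\mathrm{p}/|M|$ and $r_\mathrm{heur}=r_\mathrm{Abbe}+r_\mathrm{Nyq}/2$. In the sketch below the wavelength, pixel pitch, magnification and NA range are illustrative values, not the parameters of the experiment.

```python
import numpy as np

# Resolution-limit predictions from the text, expressed in object space.
# The numerical values below are illustrative, not the experimental ones.
WAVELENGTH = 0.490   # um
PIXEL = 3.45         # um, sensor pixel pitch
MAG = 10.0           # |M|, absolute lateral magnification

def r_abbe(na, wl=WAVELENGTH):
    # Diffraction-limited (Abbe) resolution.
    return wl / (2.0 * na)

def r_nyq(pixel=PIXEL, mag=MAG):
    # Sampling-limited (Nyquist) resolution: two projected pixels.
    return 2.0 * pixel / mag

def r_heur(na, wl=WAVELENGTH, pixel=PIXEL, mag=MAG):
    # Heuristic limit: Abbe term plus half the Nyquist term.
    return r_abbe(na, wl) + 0.5 * r_nyq(pixel, mag)

for na in np.linspace(0.05, 0.45, 9):
    ra, rn, rh = r_abbe(na), r_nyq(), r_heur(na)
    regime = "diffraction" if ra > rn else "sampling"
    print(f"NA={na:.2f}  r_Abbe={ra:5.2f}  r_Nyq={rn:5.2f}  "
          f"r_heur={rh:5.2f} um  ({regime}-dominated)")
```

The printed table makes the crossover visible: at low NA the Abbe term dominates the heuristic value, while at high NA the fixed Nyquist term takes over.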

Finally, since imaging a USAF test target is more commonly used than the slanted-edge method for measuring the lateral resolution, the last experiment was repeated using a high-resolution USAF–1951 chart (Product 58–198, Edmund Optics) (see Fig. 8). The results, shown in Fig. 9, follow the same behaviour in terms of lateral resolution as the measurements obtained with the slanted edge.

Fig. 8. Experimental measurement of the lateral resolution with a USAF–1951 chart. (a) Image and size of the iris. (b) Captured image of the chart for that given pupil size. (c) Different iris size and (d) corresponding image of the chart. (e) Mean intensity along the green rectangle drawn in (b). (f) Mean intensity along the rectangle in (d). The group (G) and element (E) corresponding to each intensity profile are indicated. Distances are referenced to the object space. The zoomed-in areas enlarge elements of the chart.

Fig. 9. Experimental resolution limit for different NA measured from the captured images of a USAF–1951 chart (red dots). Note that not all spatial frequencies are available in USAF charts; for this reason, the experimental results have a stepped structure. These results are compared with $r_\mathrm {heur}$, $r_\mathrm {MTF}$, $r_\mathrm {Abbe}$ and $r_\mathrm {Nyq}$.

6. Conclusion

Classical formulas commonly used to predict the spatial resolution of opto-digital imaging systems often overestimate their resolving power. This can be frustrating for researchers and system designers, since the expected resolution then seems unachievable. To provide a more realistic value for the resolution limit of opto-digital imaging systems, a simple formula is proposed. This formula, named here the heuristic formula, is derived theoretically and verified experimentally. The derivation relies on the LSI nature of opto-digital systems, in which three factors of comparable importance (diffraction PSF, pixel size and sampling) act in cascade on the optical signal. Under these conditions, the heuristic formula is capable of accurately predicting, through a simple arithmetic operation, the resolution limit of opto-digital imaging systems. The predicted values were verified by means of an experiment in which, by adjusting the diameter of an iris aperture stop, a wide range of relative sizes of the diffraction spot and the pixel were covered. Furthermore, the resolution limit was measured with two approaches: first by imaging a slanted edge and measuring the MTF, and second by the standard method based on imaging a USAF test target.

Funding

Universidad Nacional de Colombia (Hermes 347 grants (53712, 50069, 49570)); Generalitat Valenciana (PROMETEO/2019/048); European Regional Development Fund (RTI2018-099041-B-I00); Ministerio de Ciencia, Innovación y Universidades (RTI2018-099041-B-I00).

Disclosures

The authors declare no conflicts of interest.

Data Availability

The data presented in this study are contained within the article.

References

1. J. W. Goodman, Introduction to Fourier Optics (Roberts and Company Publishers, 2005), 3rd ed.

2. R. H. Vollmerhausen and R. G. Driggers, Analysis of Sampled Image Systems (SPIE Press, Bellingham, 1998).

3. E. Abbe, “The Relation of Aperture and Power in the Microscope (continued),” J. R. Microsc. Soc. 2(4), 460–473 (1882). [CrossRef]  

4. L. Rayleigh, “On the theory of optical images, with special references to the microscope,” Philos. Mag. Ser. 5 42(255), 167–195 (1896). [CrossRef]  

5. H. Nyquist, “Certain topics in telegraph transmission theory,” Trans. Am. Inst. Electr. Eng. 47(2), 617–644 (1928). [CrossRef]  

6. E. Betzig, G. H. Patterson, R. Sougrat, O. W. Lindwasser, S. Olenych, J. S. Bonifacino, M. W. Davidson, J. Lippincott-Schwartz, and H. F. Hess, “Imaging intracellular fluorescent proteins at nanometer resolution,” Science 313(5793), 1642–1645 (2006). [CrossRef]  

7. S.-H. Lee, J. Y. Shin, A. Lee, and C. Bustamante, “Counting single photoactivatable fluorescent molecules by photoactivated localization microscopy (PALM),” Proc. Natl. Acad. Sci. 109(43), 17436–17441 (2012). [CrossRef]  

8. M. J. Rust, M. Bates, and X. Zhuang, “Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM),” Nat. Methods 3(10), 793–796 (2006). [CrossRef]  

9. L. Galdón, G. Saavedra, J. Garcia-Sucerquia, M. Martínez-Corral, and E. Sánchez-Ortiga, “Fourier lightfield microscopy: a practical design guide,” Appl. Opt. 61(10), 2558–2564 (2022). [CrossRef]  

10. L. Galdon, H. Yun, G. Saavedra, J. Garcia-Sucerquia, J. C. Barreiro, M. Martinez-Corral, and E. Sanchez-Ortiga, “Handheld and Cost-Effective Fourier Lightfield Microscope,” Sensors 22(4), 1459 (2022). [CrossRef]  

11. G. Scrofani, J. Sola-Pikabea, A. Llavador, E. Sanchez-Ortiga, J. Barreiro, G. Saavedra, J. Garcia-Sucerquia, and M. Martínez-Corral, “FIMic: design for ultimate 3D-integral microscopy of in-vivo biological samples,” Biomed. Opt. Express 9(1), 335–346 (2018). [CrossRef]  

12. R. H. Webb, “Confocal optical microscopy,” Rep. Prog. Phys. 59(3), 427–471 (1996). [CrossRef]  

13. C. J. R. Sheppard and C. J. Cogswell, “Three-dimensional image formation in confocal microscopy,” J. Microsc. 159(2), 179–194 (1990). [CrossRef]  

14. G. Cox and C. J. Sheppard, “Practical limits of resolution in confocal and non-linear microscopy,” Microsc. Res. Tech. 63(1), 18–22 (2004). [CrossRef]  

15. S. W. Hell and J. Wichmann, “Breaking the diffraction resolution limit by stimulated emission: stimulated-emission-depletion fluorescence microscopy,” Opt. Lett. 19(11), 780–782 (1994). [CrossRef]  

16. J. B. Pawley, Handbook of Biological Confocal Microscopy (Springer, New York, NY, 1995).

17. G. D. Boreman, Modulation Transfer Function in Optical and Electro-optical Systems (SPIE Press Bellingham, Washington, 2001).

18. B. Javidi, A. Markman, and S. Rawat, “Automatic multicell identification using a compact lensless single and double random phase encoding system,” Appl. Opt. 57(7), B190–B196 (2018). [CrossRef]  

19. P. M. Douglass, T. O’Connor, and B. Javidi, “Automated sickle cell disease identification in human red blood cells using a lensless single random phase encoding biosensor and convolutional neural networks,” Opt. Express 30(20), 35965–35977 (2022). [CrossRef]  

20. T. O’Connor, C. Hawxhurst, L. M. Shor, and B. Javidi, “Red blood cell classification in lensless single random phase encoding using convolutional neural networks,” Opt. Express 28(22), 33504–33515 (2020). [CrossRef]  

21. J. D. Gaskill, Linear Systems, Fourier Transforms, and Optics (Wiley, New York, 1978).

22. S. K. Park, R. Schowengerdt, and M.-A. Kaczynski, “Modulation-transfer-function analysis for sampled image systems,” Appl. Opt. 23(15), 2572–2582 (1984). [CrossRef]  

23. M. Born and E. Wolf, Principles of Optics (Cambridge University Press, 2005), 7th ed.

24. M. Estribeau and P. Magnan, “Fast MTF measurement of CMOS imagers using ISO 12333 slanted-edge methodology,” Proc. SPIE 5251, 243–252 (2004). [CrossRef]  

25. ISO, “Photography: Electronic still picture imaging – Resolution and spatial frequency responses,” ISO 12233:2017, International Organization for Standardization, Geneva, CH (2017).

26. E. Buhr, S. Günther-Kohfahl, and U. Neitzel, “Accuracy of a simple method for deriving the presampled modulation transfer function of a digital radiographic system from an edge image,” Med. Phys. 30(9), 2323–2331 (2003). [CrossRef]  


Equations (15)

$$I(x,y)=\left[\frac{1}{|M|^{2}}\,O\!\left(\frac{x}{M},\frac{y}{M}\right)\otimes h_{\mathrm{diff}}(x,y)\right]\otimes h_{\mathrm{sens}}(x,y),$$
$$h_{\mathrm{glob}}(x,y)=h_{\mathrm{diff}}(x,y)\otimes h_{\mathrm{sens}}(x,y).$$
$$I(x,y)=\frac{1}{|M|^{2}}\,O\!\left(\frac{x}{M},\frac{y}{M}\right)\otimes h_{\mathrm{glob}}(x,y),$$
$$h_{\mathrm{diff}}(x,y)=\left|\left(\frac{\phi_{\mathrm{AS}}}{2}\right)^{2}\mathrm{Disk}\!\left(\frac{r\,\mathrm{NA}}{\lambda}\right)\right|^{2},$$
$$h_{\mathrm{sens}}(x,y)=h_{\mathrm{pix}}(x,y)\otimes h_{\mathrm{sampl}}(x,y)=\mathrm{rect}\!\left(\frac{x}{\Delta_{\mathrm{p}}},\frac{y}{\Delta_{\mathrm{p}}}\right)\otimes\mathrm{rect}\!\left(\frac{x}{\Delta_{\mathrm{p}}},\frac{y}{\Delta_{\mathrm{p}}}\right),$$
$$r_{\mathrm{Abbe}}=\frac{\lambda}{2\,\mathrm{NA}}.$$
$$r_{\mathrm{Nyq}}=\frac{2\Delta_{\mathrm{p}}}{|M|}.$$
$$\tilde{I}(u,v)=|M|^{2}\,\tilde{O}(Mu,Mv)\times\tilde{h}_{\mathrm{diff}}(u,v)\times\tilde{h}_{\mathrm{sens}}(u,v),$$
$$\mathrm{MTF}_{\mathrm{diff}}(u,v)=\frac{2}{\pi}\left[\arccos\!\left(\frac{\rho}{\rho_{0}}\right)-\frac{\rho}{\rho_{0}}\sqrt{1-\left(\frac{\rho}{\rho_{0}}\right)^{2}}\right],$$
$$\mathrm{MTF}_{\mathrm{sens}}(u,v)=\mathrm{MTF}_{\mathrm{pix}}(u,v)\times\mathrm{MTF}_{\mathrm{sampl}}(u,v)=\left[\mathrm{sinc}\!\left(\frac{u\,\Delta_{\mathrm{p}}}{M}\right)\mathrm{sinc}\!\left(\frac{v\,\Delta_{\mathrm{p}}}{M}\right)\right]^{2},$$
$$0.1=\mathrm{MTF}_{\mathrm{diff}}(u,v)\times\mathrm{MTF}_{\mathrm{sens}}(u,v),$$
$$r_{\mathrm{heur}}=r_{\mathrm{Abbe}}+\frac{1}{2}\,r_{\mathrm{Nyq}}=\frac{\lambda}{2\,\mathrm{NA}}+\frac{\Delta_{\mathrm{p}}}{|M|}.$$
$$r_{\mathrm{Abbe}}=\frac{\lambda}{\phi_{\mathrm{iris}}}\,\frac{200\ \mathrm{mm}}{M_{\mathrm{MO}}}\,\frac{f_{2}}{f_{1}}.$$
$$r_{\mathrm{Nyq}}=\frac{2\Delta_{\mathrm{p}}\,f_{2}\,200\ \mathrm{mm}}{f_{1}\,f_{\mathrm{TL}}\,M_{\mathrm{MO}}}.$$
$$\frac{r_{\mathrm{Nyq}}}{r_{\mathrm{Abbe}}}=\frac{2\Delta_{\mathrm{p}}\,\phi_{\mathrm{iris}}}{\lambda\,f_{\mathrm{TL}}}.$$
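The $10\%$-contrast condition on the global MTF (the product of the diffraction and sensor MTFs) can be solved numerically and compared with the heuristic prediction $r_\mathrm{heur}=\lambda/2\mathrm{NA}+\Delta_\mathrm{p}/|M|$. The sketch below assumes incoherent illumination and a 1D frequency slice; the default wavelength, pixel pitch and magnification are illustrative values, not the experimental parameters.

```python
import numpy as np

def mtf_global(u, na, wl=0.490, pixel=3.45, mag=10.0):
    """Global MTF model: diffraction MTF times sensor MTF.

    `u` is the spatial frequency in object space (cycles/um).  The
    diffraction factor is the incoherent circular-aperture MTF with
    cut-off rho0 = 2 NA / lambda; the sensor factor is the squared
    sinc of the projected pixel (pixel aperture times sampling).
    """
    rho0 = 2.0 * na / wl
    s = np.clip(u / rho0, 0.0, 1.0)
    mtf_diff = (2.0 / np.pi) * (np.arccos(s) - s * np.sqrt(1.0 - s * s))
    mtf_sens = np.sinc(u * pixel / mag) ** 2   # np.sinc is sin(pi x)/(pi x)
    return mtf_diff * mtf_sens

def r_mtf(na, contrast=0.1, **kw):
    """Resolution as the inverse of the 10%-contrast frequency."""
    wl = kw.get('wl', 0.490)
    u = np.linspace(1e-6, 2.0 * na / wl, 20000)
    m = mtf_global(u, na, **kw)
    cut = u[np.argmax(m <= contrast)]          # first u with MTF <= contrast
    return 1.0 / cut

def r_heur(na, wl=0.490, pixel=3.45, mag=10.0):
    # Heuristic prediction: Abbe term plus projected pixel size.
    return wl / (2.0 * na) + pixel / mag
```

With these illustrative parameters the numerical $10\%$ cut-off and the heuristic value agree to within a few tens of percent over the small-pixel and hybrid regions, mirroring the behaviour reported in the text.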