Abstract
The resolution limit achievable with an optical system is a fundamental piece of information when characterizing its performance, especially in microscopy imaging. Usually this information is given as a distance, often expressed in microns, or as a cutoff spatial frequency, often expressed in line pairs per mm. In modern imaging systems, where the final image is collected by a pixelated digital camera, the resolution limit is determined by the performance of both the optics and the digital sensor. Usually one of these factors is considered to prevail over the other when estimating the spatial resolution, so that the global performance of the imaging system is ruled either by the classical Abbe resolution limit, based on physical diffraction, or by the Nyquist resolution limit, based on the digital sensor features. This estimation fails significantly to predict the global performance of opto-digital imaging systems, such as 3D microscopes, in which neither factor is negligible. In that case, which is indeed the most common one, neither the Abbe formula nor the Nyquist formula provides by itself a reliable prediction of the resolution limit. This is a serious drawback, since system designers often use these formulae as design input parameters. To overcome this shortcoming, a simple mathematical expression, obtained by finely articulating the Abbe and Nyquist formulas to easily predict the spatial resolution limit of opto-digital imaging systems, is proposed here. The derived expression is tested experimentally and shown to be valid over a broad range of opto-digital combinations.
© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement
1. Introduction
Imaging systems are classified in terms of their optical features. Field of view (FOV), depth of field (DOF), magnification and spatial resolution are the features mainly considered when ranking imaging systems of the same type. Among them, the spatial resolution limit, understood as the capability of the imaging system to distinguish the images of two adjacent object points, is perhaps the most studied figure for any type of imaging system since the very onset of the optical sciences.
This interest has remained very active until today, embracing the evolution demanded by advances in imaging technology. For instance, in modern imaging systems, where the irradiance detectors are discretized digital cameras, the study of the spatial resolution must consider the performance of both the optical system and the digital sensor. The optical performance is determined by wave diffraction, through the shape and size of the diffractive point-spread function (PSF) [1]. The digital performance is mainly determined by the pixel size and spacing, through the Nyquist limit [2]. It is usually assumed that the spatial resolution of an opto-digital imaging system is ruled only by the dominant effect between the optical and the digital ones. Thus either wave-optics or pixel-size effects are considered, following existing criteria associated exclusively with diffraction or with pixelation [3–5].
To give an example, in digital photography pixels are commonly larger than the diameter of the first lobe of the diffractive PSF. Consequently, pixel size is usually considered to be the limiting factor in terms of spatial resolution. Note, however, that even when the Nyquist and PSF sizes differ significantly, the wave-optics influence cannot be entirely discarded. This is increasingly evident in modern digital photography, where sensor pixel sizes are progressively shrinking.
On the contrary, in conventional microscopy the diffractive PSF tends to be properly sampled by the pixels, so diffraction is typically considered to be the limiting factor. Here too, the decision to discard one effect (Nyquist in this case) can lead to an overestimation of the resolving power. Furthermore, in optical microscopes there are many cases in which the PSF and the pixel size of the camera are comparable. For instance, in microscopy techniques working in the low-photon regime, the pixel size of the sensor must be compromised in order to increase the number of photons detected by each pixel, as occurs in techniques based on single molecule localization microscopy (SMLM) [6–8]. In the same way, any microscope with low magnification relative to its numerical aperture possesses a diffractive PSF size close to or even below the Nyquist sampling requirement. That is the case of Fourier Lightfield Microscopy (FLMic) [9–11], in which the combined effect of dividing the microscope aperture into smaller parts by a lenslet array and reducing the focal length of the tube lens results in a diffraction spot of dimensions comparable with the pixel size. Moreover, in scanning imaging systems, such as confocal laser scanning microscopes (CLSM) [12–14] or stimulated emission depletion microscopy (STED) [15], the resolution of the final image can depend on the capturing conditions. In these scanning systems, the size of the pixel is chosen by the user when selecting the scanning step. This decision is often made by choosing a scanning step that matches the Nyquist sampling criterion of the PSF of the system [16], which, slightly in CLSM and strongly in STED, is smaller than the widefield PSF.
In such cases, in which the diffraction and pixelation limits are close to each other, both effects must be taken into account to provide a proper resolution limit for the imaging system, as has been pointed out in the past by theoretical analyses performed in the frequency domain [17]. Naturally, new optical approaches that do not require lenses may remedy issues with the lens PSF [18–20].
When projecting the practical implementation of an imaging system, usually with off-the-shelf components, researchers or system designers need easy formulas for the preliminary evaluation of its optical features. Concerning the spatial resolution, in most cases neither the diffraction nor the pixelation effects are negligible. It is then necessary to define a figure that accounts for the combined action of both effects. Aiming to provide insight in this regard, this work presents an analysis of the combined performance of physical diffraction and the digital camera, whether the two carry different or even weights in the global spatial resolution. The analysis leads to a simple mathematical expression that finely articulates the well-known Abbe and Nyquist resolution limits [3,5] to forecast the global spatial-resolution performance of an opto-digital imaging system. The utility of the proposed formula is contrasted experimentally.
2. PSF of opto-digital imaging systems
The spatial resolution of a modern imaging system is characterized by the performance of the global setup, including the physical diffraction and the sensor features. Assuming that the global system is, in good approximation, linear and shift invariant (LSI), that performance is analyzed mathematically through the global and the individual PSFs, represented here by functions $h(x,y)$, which describe the response of the system to a point object [1]. Note that in LSI systems the response to a laterally-displaced point object is also shifted while preserving its shape [1,21]. Given that in an opto-digital imaging system the sensor collects the image provided by the optics of the system, and accepting that the sensor response can be characterized through a sensor PSF, one can write
Then, taking advantage of the associative property of the convolution product, one can define a global PSF as the convolution of the two individual PSFs,
In these terms, the two-dimensional (2D) irradiance distribution of the image captured by the opto-digital imaging system is given by,
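Since the numbered equations are not reproduced in this excerpt, the three relations just described can be reconstructed compactly (notation assumed: $I$ is the captured irradiance distribution and $O$ the geometrical image of the object):

```latex
\begin{align}
I(x,y) &= \bigl[O(x,y) * h_{\mathrm{diff}}(x,y)\bigr] * h_{\mathrm{sens}}(x,y),\\
h_{\mathrm{global}}(x,y) &= h_{\mathrm{diff}}(x,y) * h_{\mathrm{sens}}(x,y),\\
I(x,y) &= O(x,y) * h_{\mathrm{global}}(x,y).
\end{align}
```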
The diffraction PSF should take into account both the effect of apertures and diffracting screens on wave propagation and the effect of optical aberrations. Although aberration effects are an important issue, in this study we assume that the design of the optical elements compensates largely for the aberrations, so that we can neglect them and focus our study on wave propagation. Then the incoherent diffractive PSF equals the squared modulus of the Fourier transform of the transmittance of the aperture stop (AS) of the system. Typically, the AS of an imaging system has a circular shape, so its Fourier transform is the Airy disk function; hence the diffraction PSF, evaluated at the image plane, is:
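The referenced equation is not reproduced in this excerpt; written in object-space coordinates, the incoherent PSF of a circular AS takes the standard Airy form (a reconstruction; $r$ is the radial coordinate and $J_1$ the first-order Bessel function):

```latex
h_{\mathrm{diff}}(r) \;\propto\; \left[\frac{2\,J_1\!\left(2\pi\,\mathrm{NA}\,r/\lambda\right)}{2\pi\,\mathrm{NA}\,r/\lambda}\right]^{2},
```

whose first zero falls at $r = 0.61\,\lambda/\mathrm{NA}$.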
Concerning the sensor PSF, a point object should be considered to ideally correspond to a pixel in the sensor plane. The detector footprint PSF is equivalent to a pixel, that is, a rectangle function whose width is the pixel size, $\Delta _\mathrm {p}$ [2,17]. In addition, the sampling inherent in any detection system due to the discretization into pixels must be considered. In the case of a pixelated digital camera, a sample is taken at each spatial interval equal to the pixel size. This sampling causes the system not to be strictly LSI: the captured image is different if the image of the input point object falls on the center or on the edge of a given pixel. Hence, the recorded irradiance distribution depends on the position at which the image falls on the sensor. This fact violates the shift-invariance assumption, thus forbidding stricto sensu the use of direct convolution operations. To recover the possibility of using the LSI formalism, it is necessary to generalize and define an average sampling PSF [22]. For this purpose, the PSF of the sampling by a square-pixel sensor is considered to be a rectangle function whose width is the sampling interval [17]. In this study we consider contiguous pixels, so this interval equals the pixel width. The sensor PSF is therefore the result of the convolution between the detector footprint PSF and the sampling PSF,
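The referenced expression is not reproduced here; under the contiguous-pixel assumption stated above, the convolution of the two equal-width rectangle functions yields a triangle function (a reconstruction, with $\mathrm{tri}(x)=\max(0,1-|x|)$):

```latex
h_{\mathrm{sens}}(x) = \mathrm{rect}\!\left(\frac{x}{\Delta_\mathrm{p}}\right) * \mathrm{rect}\!\left(\frac{x}{\Delta_\mathrm{p}}\right) = \Delta_\mathrm{p}\,\mathrm{tri}\!\left(\frac{x}{\Delta_\mathrm{p}}\right),
```

a function of compact support whose total base is $2\Delta_\mathrm{p}$.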
As shown in Fig. 1 for three different cases, the global PSF is a modified Airy disk, broadened by the convolution with the sensor PSF. This broadening occurs because the convolution of functions of compact support results in a function whose width equals the sum of the widths of the original functions. Fig. 1(a) shows the case in which the diffraction PSF is much wider than the sensor PSF, so that the global PSF closely resembles the diffraction one. In contrast, Fig. 1(b) shows a case where the sensor PSF is much wider than the diffraction PSF, giving rise to a global PSF whose size and shape are similar to those of the sensor PSF. Finally, Fig. 1(c) shows an intermediate, or hybrid, case in which neither effect is dominant, resulting in a global PSF whose shape and size are comparably influenced by diffraction and sensor features.
The lateral resolution of systems governed strictly either by diffraction or by the pixelated digital camera has been studied extensively, and well-known formulae exist for the easy calculation of the resolution limit. When the resolution is assumed to be limited only by wave diffraction, Abbe's resolution limit is the key figure [3]. It evaluates which spatial frequencies are filtered out as a function of the size and shape of the aperture stop. For a circular AS, its radius sets a certain spatial-frequency cut-off, indicating that higher spatial frequencies are not transmitted to the final image, thus limiting the spatial resolution. The spatial-domain equivalent of this cut-off frequency is Abbe's resolution limit [23],
Conversely, if the effects of diffraction and aberrations are negligible as compared with those of the pixel size, the resolution is determined by the pixel structure of the sensor. According to the Nyquist criterion [5], two image points are resolvable if they are captured by different pixels and there is at least one pixel in between them. In the image space this distance is $2\Delta _\mathrm {p}$, so in the object space the Nyquist resolution limit is
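The two referenced limits are not reproduced in this excerpt; in object space they take the standard forms (consistent with [3,5,23] and with the image-space distance $2\Delta_\mathrm{p}$ stated above):

```latex
r_\mathrm{Abbe} = \frac{\lambda}{2\,\mathrm{NA}}, \qquad r_\mathrm{Nyq} = \frac{2\Delta_\mathrm{p}}{|M|},
```

where $M$ is the lateral magnification of the system.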
It is usual to consider one of these two values, $r_{\text {Abbe}}$ or $r_{\text {Nyq}}$, to estimate the lateral resolution of opto-digital imaging systems, depending on whether the diffraction-spot sampling is above or below the Nyquist limit, taking as resolution limit the one with the highest value. Nonetheless, there is a range of sizes of both the pixel and the diffraction spot (see Fig. 1(c)) in which neither limit accurately expresses the resolution limit of the system. Furthermore, due to the complexity of the global PSF, a simple formula for an easy calculation of the expected resolution limit is not yet available.
To address this shortcoming, the following section describes a method to estimate the global resolution when the Airy disk and the pixel size are comparable in terms of spatial resolution, and a simple mathematical formula is proposed.
3. Transfer of spatial frequencies
Assuming an opto-digital imaging system that, in good approximation, is LSI, its performance is fully characterized by its PSF, and therefore the final image distribution can be expressed according to Eqs. (2) and (3). These successive convolutions in the spatial domain can be analyzed more simply in the frequency domain through a Fourier transform. Accordingly, the frequency content of the final captured image is given by a product of functions,
The diffractive MTF is the Fourier transform of Eq. (4), with the functional form [17]
The sensor MTF, including the pixel MTF and the sampling MTF, is the Fourier transform of Eq. (5),
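The two referenced MTF expressions are not reproduced in this excerpt; in object-space frequencies they take the standard forms (a reconstruction following [17]; radial frequency $\rho$, cut-off $\rho_c = 2\,\mathrm{NA}/\lambda$, and $\mathrm{sinc}(x)=\sin(\pi x)/(\pi x)$):

```latex
\mathrm{MTF}_\mathrm{diff}(\rho) = \frac{2}{\pi}\left[\arccos\!\left(\frac{\rho}{\rho_c}\right) - \frac{\rho}{\rho_c}\sqrt{1-\left(\frac{\rho}{\rho_c}\right)^{2}}\,\right],\quad \rho \le \rho_c,
```

```latex
\mathrm{MTF}_\mathrm{sens}(u) = \mathrm{sinc}\!\left(\frac{\Delta_\mathrm{p}}{|M|}\,u\right)\,\mathrm{sinc}\!\left(\frac{\Delta_\mathrm{p}}{|M|}\,u\right) = \mathrm{sinc}^{2}\!\left(\frac{\Delta_\mathrm{p}}{|M|}\,u\right),
```

the product of the footprint and sampling MTFs, which under the contiguous-pixel assumption are equal.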
These MTFs are displayed, for a particular case, in Fig. 2, together with the global MTF, which is obtained as $\mathrm {MTF}_\mathrm {global} (u,v)=\mathrm {MTF}_\mathrm {diff} (u,v)\times \mathrm {MTF}_\mathrm {sens} (u,v)$. In this example the three MTFs are similar in width, which results in a global MTF that is much narrower. Consequently, a stronger filtering of spatial frequencies occurs when considering the global system than when the diffraction or the sensor is considered individually. This simple analysis illustrates the importance of considering the combined performance of all the subsystems when determining the global spatial resolution of an opto-digital imaging system.
As stated above, the MTF defines the spatial resolution of an imaging system. The cut-off value beyond which the system introduces an unacceptable modulation of the transferred spatial frequencies, corresponding to a decay of the MTF below a given threshold, dictates the actual cut-off frequency of the imaging system, whose mathematical inverse is the sought spatial resolution limit. Conventionally, the said cut-off frequency is defined by the threshold value 0.1 [17,24].
When optical designers, or researchers, face the design of an opto-digital imaging experiment with off-the-shelf components, they must select the adequate objective (NA and magnification), relay optics and pixelated sensor. Such selection aims to meet the required optical specifications, like resolution, FOV or DOF. According to above statements, to predict the resolution for any combination of opto-digital components, the designer must solve numerically the equation
which is somewhat challenging, and then calculate the inverse, named here $r_\mathrm {MTF}$, of the obtained 10% cut-off frequency. Note that this procedure contrasts strongly with the usual one, consisting of calculating a single number by use of Eq. (6) or Eq. (7).
4. Formula for the resolution limit of opto-digital imaging systems
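As an illustration of this numerical procedure, the sketch below finds the 10% cut-off by bisection. It is a reconstruction under stated assumptions: the standard aberration-free circular-aperture diffraction MTF and a $\mathrm{sinc}^2$ sensor MTF, both written in object-space frequencies, with arbitrary example parameter values.

```python
import numpy as np

def mtf_diff(u, na, wl):
    """Incoherent MTF of an aberration-free circular aperture (object space)."""
    uc = 2 * na / wl                      # Abbe cut-off frequency
    s = np.clip(u / uc, 0.0, 1.0)
    return (2 / np.pi) * (np.arccos(s) - s * np.sqrt(1 - s * s))

def mtf_sens(u, dp_obj):
    """Footprint times sampling MTF: sinc^2 of the object-space pixel size."""
    return np.sinc(dp_obj * u) ** 2       # np.sinc(x) = sin(pi x)/(pi x)

def r_mtf(na, wl, dp_obj, thr=0.1):
    """Resolution limit: inverse of the frequency where the global MTF = thr."""
    f = lambda u: mtf_diff(u, na, wl) * mtf_sens(u, dp_obj) - thr
    lo, hi = 0.0, 2 * na / wl             # global cut-off cannot exceed Abbe's
    for _ in range(200):                  # plain bisection, no SciPy needed
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 1.0 / (0.5 * (lo + hi))

# Hypothetical values: NA = 0.45, wavelength 490 nm, 0.25 um object-space pixel
r = r_mtf(0.45, 490e-9, 0.25e-6)
print(f"r_MTF = {r * 1e6:.2f} um")
```

Note that the resulting $r_\mathrm{MTF}$ exceeds both the Abbe and the Nyquist limits taken individually, as discussed in the text.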
To enable a quick yet accurate calculation of the spatial resolution performance of an opto-digital imaging system, a heuristic mathematical expression, obtained as a linear combination of the Abbe and Nyquist resolution limits, is proposed here,
The underlying physics of this equation lies in the fact that Abbe's limit corresponds approximately to half the diameter of the Airy disk. Following the same idea, half the base of the sensor PSF, including footprint and sampling effects, is $\Delta _\mathrm {p}/\left|{M}\right|$ in the object space. Since the base of the convolution of two functions of compact support equals the sum of the bases of the convolved functions, it is proposed to sum these two terms to obtain the resolution limit, as expressed in Eq. (12).
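Since Eq. (12) itself is not reproduced in this excerpt, the following sketch implements it as reconstructed from the verbal description above, i.e. the Abbe term plus one object-space pixel, $r_\mathrm{heur} = \lambda/(2\,\mathrm{NA}) + \Delta_\mathrm{p}/|M|$; the parameter values are hypothetical:

```python
def r_heur(wavelength, na, pixel, mag):
    """Heuristic resolution limit (object space), as described in the text:
    Abbe's term (half the Airy-disk diameter) plus half the base of the
    sensor PSF, i.e. one object-space pixel."""
    return wavelength / (2 * na) + pixel / abs(mag)

# Hypothetical example: NA = 0.45, 490 nm, 5 um pixels, 20x magnification
r = r_heur(490e-9, 0.45, 5e-6, 20)
print(f"r_heur = {r * 1e6:.3f} um")   # about 0.794 um
```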
To assess the validity of Eq. (12), we have calculated four alternative values for the resolution limit: $r_\mathrm {heur}$, $r_\mathrm {MTF}$, $r_\mathrm {Abbe}$, and $r_\mathrm {Nyq}$. The four results are plotted in Fig. 3 for different values of NA and different pixel sizes in object space, $\Delta _\mathrm {p}/\left|{M}\right|$. Without any loss of generality, the study has been performed at a fixed wavelength of 490 nm.
In Fig. 3, one can recognize three different resolution regions. The first is the region in which the heuristic formula, Eq. (12), and Abbe's formula predict similar values for the resolution limit. We name this region, characterized by low values of $\Delta _\mathrm {p}/\left|{M}\right|$, the small pixel region. In this region the opto-digital system is often considered to be diffraction limited, which has classically led to simply applying Abbe's criterion. However, even in this region the MTF curve, and also the heuristic one, still lie above the Abbe curve, showing that the actual resolution limit is slightly larger than the one predicted by Abbe. This indicates that, even in diffraction-limited configurations, it is more appropriate to use the proposed Eq. (12) than just Abbe's resolution limit.
The same Fig. 3 shows a region, characterized by large values of NA and of $\Delta _\mathrm {p}/\left|{M}\right|$, in which Nyquist's resolution limit is larger than the others. We name this the large pixel region.
In the hybrid region of Fig. 3, where Abbe's and Nyquist's resolution limits are quite close to each other, the resolution limits provided by the $r_\mathrm {heur}$ and $r_\mathrm {MTF}$ formulae are much larger than those predicted by Abbe or by Nyquist.
To ease the analysis, a slice of Fig. 3 is shown in Fig. 4, plotting the resolution limits for different values of NA and a fixed value $\Delta _\mathrm {p}/\left|{M}\right|=0.25\;\mathrm {\mu m }$, equivalent to, for instance, $\Delta _\mathrm {p}=5\;\mathrm {\mu m}$ and $\left|{M}\right|=20$. It can be noted that the proposed resolution limit $r_\mathrm {heur}$ closely approximates the $r_\mathrm {MTF}$ curve. To analyze this approximation quantitatively, one can define a figure, named here the dominance ratio $r_\mathrm {Nyq} / r_\mathrm {Abbe}$, to evaluate the range in which Abbe's and Nyquist's resolution limits are comparable. The boundaries of that range are chosen here as $0.25 \leq r_\mathrm {Nyq} / r_\mathrm {Abbe} \leq 4.0$, which are covered by the slice plotted in Fig. 4. Calculating within this plotted range the relative error between the proposed limit, Eq. (12), and that computed from the MTF, $\left|{r_\mathrm {MTF}-r_\mathrm {heur}}\right|/r_\mathrm {MTF}$, the maximum relative error is $13\%$, while the average is $9\%$. It becomes clear that the proposed formula is a good approximation for the expected resolution limit when the resolution of the opto-digital system is governed comparably by the physical diffraction and the digital camera features.
5. Experimental validation
To study the reliability of the proposed figure of merit, $r_\mathrm {heur}$, an experiment was designed to measure the spatial resolution limit of an opto-digital system with variable NA and fixed pixel size. By changing the NA, the ratio between Abbe's and Nyquist's resolution limits is tuned; in this way the three resolution regions can be studied with the same system. The NA of the microscope objective (MO) was tuned by gradually changing the aperture stop (AS) diameter. To do this in a flexible and accurate way, an afocal relay arrangement was inserted to generate an optically conjugate plane of the AS. An iris aperture was located at this conjugate plane, so that changing the iris size directly implies a modification of the AS diameter and, therefore, of the effective NA.
The scheme of the proposed experimental setup is shown in Fig. 5. The microscope branch of this scheme is equivalent to that of a telecentric microscope with an infinity-corrected MO and a tube lens (TL), but with the relay lenses L1 and L2 inserted. This branch is identified by the green beam. Additionally, to measure the iris diameter, a beam splitter and a second afocal relay system, composed of lenses L3 and L4, are placed. This second branch, represented by the red beam, allows the capture of an image of the iris.
The choice of the MO, lenses L1 and L2, and the TL must ensure that Nyquist's resolution limit dominates when the iris is fully open. To guide this choice, Eqs. (6) and (7) are rewritten in terms of the parameters to be chosen. For dry objectives $\mathrm {NA}=\phi _\mathrm {AS}/2f_\mathrm {MO}$, with $f_\mathrm {MO}$ the focal length of the MO. The AS diameter is related to the iris diameter by the lateral magnification of the afocal system, $\phi _\mathrm {AS}=\phi _\mathrm {iris}\,f_1/f_2$. The focal length of the MO may be specified by the manufacturer. If not, it is obtained from the nominal magnification, $M_\mathrm {MO}$, and the manufacturer, since each manufacturer refers to a certain TL focal length, $f_\mathrm {TL}$. Choosing a Nikon MO, which operates with $f_\mathrm {TL}=200\,\text {mm}$, the focal length of the MO is $f_\mathrm {MO}=200\,\text {mm}/M_\mathrm {MO}$. Therefore, Abbe's resolution limit is rewritten as
Knowing that the lateral magnification of the microscope branch is $M=\left (f_1 f_\mathrm {TL}\right )/\left (f_2 f_\mathrm {MO}\right )$, the Nyquist’s resolution limit is
Therefore, the dominance ratio is
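The three rewritten expressions referenced above are not reproduced in this excerpt; they can be reconstructed from the relations just stated ($\mathrm{NA}=\phi_\mathrm{AS}/2f_\mathrm{MO}$, $\phi_\mathrm{AS}=\phi_\mathrm{iris}f_1/f_2$, $M=f_1 f_\mathrm{TL}/(f_2 f_\mathrm{MO})$):

```latex
r_\mathrm{Abbe} = \frac{\lambda\, f_2\, f_\mathrm{MO}}{\phi_\mathrm{iris}\, f_1}, \qquad
r_\mathrm{Nyq} = \frac{2\Delta_\mathrm{p}\, f_2\, f_\mathrm{MO}}{f_1\, f_\mathrm{TL}}, \qquad
\frac{r_\mathrm{Nyq}}{r_\mathrm{Abbe}} = \frac{2\Delta_\mathrm{p}\,\phi_\mathrm{iris}}{\lambda\, f_\mathrm{TL}}.
```

Note that the dominance ratio so obtained is consistent with the numerical value $370/f_\mathrm{TL}$ quoted below for the reported setup.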
The camera used in the microscope branch is the CS2100M-USB model from Thorlabs; a 16-bit, monochromatic, sCMOS-type sensor with $1920\times 1080$ pixels of size $\Delta _\mathrm {p}=5.04\;\mathrm {\mu m}$. The iris used has a maximum diameter of 18 mm. An LED with an emission spectrum centered at a wavelength of 490 nm and a bandwidth of 40 nm was used as an illuminator. Considering these values for a fully opened iris, Eq. (15) gives $r_\mathrm {Nyq}/r_\mathrm {Abbe}=370 /f_\mathrm {TL}$, provided that $f_\mathrm {TL}$ is expressed in mm. In microscopy, $f_\mathrm {TL}$ is typically in the range of 160 to 200 mm, depending on the manufacturer. Taking $f_\mathrm {TL}=200\,\text {mm}$ results in $r_\mathrm {Nyq}/r_\mathrm {Abbe}=1.85$, which is not large enough to consider that Nyquist's resolution limit dominates. Choosing $f_\mathrm {TL}=100\,\text {mm}$, $r_\mathrm {Nyq}/r_\mathrm {Abbe}=3.70$ is obtained, and under these conditions one can consider that diffractive effects are not relevant. For this reason, this value of $f_\mathrm {TL}$ was chosen for our experimental setup.
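A quick numerical check of these figures can be sketched as follows; the closed-form ratio $2\Delta_\mathrm{p}\phi_\mathrm{iris}/(\lambda f_\mathrm{TL})$ is reconstructed from the relations given in the text, with the reported setup values:

```python
def dominance_ratio(pixel_mm, iris_mm, wavelength_mm, f_tl_mm):
    """r_Nyq / r_Abbe for a fully opened iris (all lengths in mm)."""
    return 2 * pixel_mm * iris_mm / (wavelength_mm * f_tl_mm)

# Reported values: 5.04 um pixel, 18 mm iris, 490 nm wavelength
pix, iris, wl = 5.04e-3, 18.0, 490e-6
print(f"{dominance_ratio(pix, iris, wl, 200.0):.2f}")  # prints 1.85
print(f"{dominance_ratio(pix, iris, wl, 100.0):.2f}")  # prints 3.70
```

This reproduces the quoted values for $f_\mathrm{TL}=200$ mm and $f_\mathrm{TL}=100$ mm.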
Moreover, the MO and lenses L1 and L2 must ensure that the image of the iris fits inside the AS. Taking a Nikon MO of $\mathrm {NA}=0.45$ and magnification $10\times$, so that $f_\mathrm {MO}=20\,\text {mm}$, the diameter of its AS is calculated to be $\phi _\mathrm {AS}=18\,\text {mm}$. Since the maximum diameter of the iris equals this value, the magnification of the afocal system must be unity, thus satisfying $f_2/f_1=1$. The focal lengths have been chosen as $f_1=f_2=200\,\text {mm}$.
The camera for measuring the iris size is the Thorlabs model DCU223C, with a CCD sensor of pixel size $\Delta _\mathrm {p}=4.65\;\mathrm {\mu m}$ and area $5.80\times 4.92\,\text {mm}^2$. To measure the iris diameter, its image on the sensor plane must be smaller than the sensor size. For this reason, L3 and L4 focal lengths are chosen to be $f_3=200\,\text {mm}$ and $f_4=50\,\text {mm}$, so that the lateral magnification of this afocal system is $1/4$.
To measure the spatial resolution of the system, the slanted-edge method [24–26] is implemented in this work. This method is based on capturing the image of an edge that is slightly slanted with respect to the vertical axis of the sensor, so that each row of pixels across the edge is a sampling of the edge spread function (ESF). The ESF is undersampled when a single row is considered, so the pixel data near the edge are projected and composed to obtain a sampling improvement. The derivative of this ESF is computed to obtain the line spread function (LSF), equivalent to the PSF in the one-dimensional case. The MTF is finally obtained as the discrete Fourier transform of the LSF. The measurements were made using the Slant Edge MTF target from Thorlabs, consisting of an L-shaped pattern slanted by $5^{\circ }$. A set of images of this test was captured for different values of the iris diameter, which was also measured. The measurement uncertainty of the diameter is estimated at 5 pixels, corresponding to 0.09 mm in the object space. Then, each image was computationally analyzed to obtain the MTF information. In Fig. 6, the captured images of the slanted edge and of the iris, together with the corresponding MTFs for two different pupil sizes, are displayed. In both plots of the MTF in Fig. 6, a decay is observed as the spatial frequency increases. As one can expect, higher frequencies suffer greater attenuation. Of course, the MTF corresponding to the smaller pupil size (smaller NA) decays faster than that for the larger pupil (larger NA). In the plots of the MTF, a horizontal line has been drawn at the value 0.1, crossing the MTF curves at the spatial frequencies whose contrast is $10\%$. Below this value, it is considered that the higher frequencies are not well discerned in the image. This experimental cut-off frequency corresponds to the mathematical inverse of the experimental spatial resolution limit of the imaging system.
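The ESF to LSF to MTF pipeline described above can be illustrated with a minimal 1D sketch; this is a simplified reconstruction using a synthetic Gaussian-blurred edge, and it omits the multi-row projection step of the full ISO 12233 procedure:

```python
import numpy as np

# Synthetic edge-spread function: an ideal edge blurred by a Gaussian LSF
n, sigma = 1024, 4.0                       # samples, blur width (pixels)
x = np.arange(n)
lsf_true = np.exp(-((x - n // 2) ** 2) / (2 * sigma ** 2))
esf = np.cumsum(lsf_true)
esf /= esf[-1]

# Differentiate the ESF to recover the line-spread function
lsf = np.diff(esf)

# MTF as the normalized magnitude of the Fourier transform of the LSF
mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]
freq = np.fft.rfftfreq(lsf.size)           # cycles per pixel

# First frequency where the MTF drops below the 10% threshold (interpolated)
k = np.argmax(mtf < 0.1)
u10 = np.interp(0.1, [mtf[k], mtf[k - 1]], [freq[k], freq[k - 1]])
print(f"10% cut-off: {u10:.4f} cycles/pixel")
```

For a Gaussian LSF of width $\sigma$ the MTF is $\exp(-2\pi^2\sigma^2 u^2)$, so the 10% crossing should appear near $\sqrt{\ln 10}/(\pi\sigma\sqrt{2}) \approx 0.085$ cycles/pixel for $\sigma = 4$, which the sketch recovers.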
By capturing and analyzing a set of images equivalent to those in Fig. 6, the value of the experimental resolution limit, $r_\mathrm {exp}$, for different NA but constant pixel size, is measured. These experimental measurements are illustrated in Fig. 7, where we have plotted also the $r_\mathrm {heur}$, $r_\mathrm {MTF}$, $r_\mathrm {Abbe}$ and $r_\mathrm {Nyq}$ curves.
From the comparison between the experimental values and those provided by the different theoretical formulae we draw the following conclusions. (a) Abbe's formula always underestimates the resolution limit; in other words, it always overestimates the lateral resolution of opto-digital imaging systems. This overestimation occurs even in the region of Abbe's prevalence (the small pixel region). As a consequence, the predicted maximum resolution of the optical system may seem experimentally unattainable when this resolution criterion is used. (b) Along both the small pixel region and the hybrid region, the $r_\mathrm {heur}$ and $r_\mathrm {MTF}$ formulae match the experimental results accurately. This confirms the utility of these formulae over a large range of experimental situations. (c) In the large pixel region, the $r_\mathrm {Nyq}$ formula successfully predicts the lateral resolution of the optical system. In this case, if the pixel size is larger than the diffraction PSF, the system behaviour can no longer be considered LSI; as a consequence, the MTF model and, therefore, the heuristic model are no longer valid. In addition, in this region aliasing effects could appear if the sampling rate is below the Nyquist condition. Obviously, in this region the sampling effects dominate over the diffraction ones. With this in mind, we have calculated, along the small pixel and hybrid regions (excluding the last four data points of Fig. 7), the relative error between the experimental data and those calculated with the $r_\mathrm {heur}$ formula, whose average value is $10\%$. Thus, the proposed general criterion for estimating the lateral resolution of an opto-digital system, $r_\mathrm {heur}$, holds for cases in which the sampling is not far below the Nyquist criterion. In that extreme case, Nyquist's formula still provides the best agreement with the experimental values.
Finally, since the image of a USAF test target is more commonly used for measuring the lateral resolution than the slanted-edge method, the last experiment was repeated using a high-resolution USAF-1951 chart (Product 58-198, Edmund Optics) (see Fig. 8). The results, shown in Fig. 9, follow the same behaviour in terms of lateral resolution as the measurements obtained with the slanted edge.
6. Conclusion
Classical formulas commonly used to predict the spatial resolution of opto-digital imaging systems often overestimate their resolving power. This can lead to frustration for researchers or system designers, as the expected resolution seems to be unachievable. To provide a more realistic value of the resolution limit of opto-digital imaging systems, a simple formula is proposed. This formula, referred to here as the heuristic formula, is theoretically derived and experimentally verified. The theoretical derivation considers the LSI nature of opto-digital systems, which gives similar importance to three factors (diffraction PSF, pixel size and sampling) that act in cascade on the optical signal. Under these conditions, the heuristic formula is capable of accurately predicting, through a simple arithmetic operation, the resolution limit of opto-digital imaging systems. The predicted values were verified by means of an experiment in which, by adjusting the diameter of an iris aperture stop, a wide range of relative sizes of the diffraction spot and the pixel was covered. Furthermore, two approaches to measuring the resolution limit of the system were employed: first, using a slanted edge to measure the MTF, and second, the standard method based on imaging a USAF test target.
Funding
Universidad Nacional de Colombia (Hermes 347 grants (53712,50069,49570)); Generalitat Valenciana (PROMETEO/2019/048); European Regional Development Fund (RTI2018-099041-B-I00); Ministerio de Ciencia, Innovación y Universidades (RTI2018-099041-B-I00).
Disclosures
The authors declare no conflicts of interest.
Data Availability
The data presented in this study are contained within the article.
References
1. J. W. Goodman, Introduction to Fourier Optics (Roberts and Company Publishers, 2005), 3rd ed.
2. R. H. Vollmerhausen and R. G. Driggers, Analysis of Sampled Image Systems (SPIE Press, Bellingham, 1998).
3. E. Abbe, “The Relation of Aperture and Power in the Microscope (continued),” J. R. Microsc. Soc. 2(4), 460–473 (1882). [CrossRef]
4. L. Rayleigh, “On the theory of optical images, with special references to the microscope,” Philos. Mag. Ser. 5 42(255), 167–195 (1896). [CrossRef]
5. H. Nyquist, “Certain topics in telegraph transmission theory,” Trans. Am. Inst. Electr. Eng. 47(2), 617–644 (1928). [CrossRef]
6. E. Betzig, G. H. Patterson, R. Sougrat, O. W. Lindwasser, S. Olenych, J. S. Bonifacino, M. W. Davidson, J. Lippincott-Schwartz, and H. F. Hess, “Imaging intracellular fluorescent proteins at nanometer resolution,” Science 313(5793), 1642–1645 (2006). [CrossRef]
7. S.-H. Lee, J. Y. Shin, A. Lee, and C. Bustamante, “Counting single photoactivatable fluorescent molecules by photoactivated localization microscopy (palm),” Proc. Natl. Acad. Sci. 109(43), 17436–17441 (2012). [CrossRef]
8. M. J. Rust, M. Bates, and X. Zhuang, “Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM),” Nat. Methods 3(10), 793–796 (2006). [CrossRef]
9. L. Galdón, G. Saavedra, J. Garcia-Sucerquia, M. Martínez-Corral, and E. Sánchez-Ortiga, “Fourier lightfield microscopy: a practical design guide,” Appl. Opt. 61(10), 2558–2564 (2022). [CrossRef]
10. L. Galdon, H. Yun, G. Saavedra, J. Garcia-Sucerquia, J. C. Barreiro, M. Martinez-Corral, and E. Sanchez-Ortiga, “Handheld and Cost-Effective Fourier Lightfield Microscope,” Sensors 22(4), 1459 (2022). [CrossRef]
11. G. Scrofani, J. Sola-Pikabea, A. Llavador, E. Sanchez-Ortiga, J. Barreiro, G. Saavedra, J. Garcia-Sucerquia, and M. Martínez-Corral, “FIMic: design for ultimate 3D-integral microscopy of in-vivo biological samples,” Biomed. Opt. Express 9(1), 335–346 (2018). [CrossRef]
12. R. H. Webb, “Confocal optical microscopy,” Rep. Prog. Phys. 59(3), 427–471 (1996). [CrossRef]
13. C. J. R. Sheppard and C. J. Cogswell, “Three-dimensional image formation in confocal microscopy,” J. Microsc. 159(2), 179–194 (1990). [CrossRef]
14. G. Cox and C. J. Sheppard, “Practical limits of resolution in confocal and non-linear microscopy,” Microsc. Res. Tech. 63(1), 18–22 (2004). [CrossRef]
15. S. W. Hell and J. Wichmann, “Breaking the diffraction resolution limit by stimulated emission: stimulated-emission-depletion fluorescence microscopy,” Opt. Lett. 19(11), 780–782 (1994). [CrossRef]
16. J. B. Pawley, Handbook of Biological Confocal Microscopy (Springer, New York, NY, 1995).
17. G. D. Boreman, Modulation Transfer Function in Optical and Electro-optical Systems (SPIE Press Bellingham, Washington, 2001).
18. B. Javidi, A. Markman, and S. Rawat, “Automatic multicell identification using a compact lensless single and double random phase encoding system,” Appl. Opt. 57(7), B190–B196 (2018). [CrossRef]
19. P. M. Douglass, T. O’Connor, and B. Javidi, “Automated sickle cell disease identification in human red blood cells using a lensless single random phase encoding biosensor and convolutional neural networks,” Opt. Express 30(20), 35965–35977 (2022). [CrossRef]
20. T. O’Connor, C. Hawxhurst, L. M. Shor, and B. Javidi, “Red blood cell classification in lensless single random phase encoding using convolutional neural networks,” Opt. Express 28(22), 33504–33515 (2020). [CrossRef]
21. J. D. Gaskill, Linear Systems, Fourier Transforms, and Optics (Wiley, New York, 1978).
22. S. K. Park, R. Schowengerdt, and M.-A. Kaczynski, “Modulation-transfer-function analysis for sampled image systems,” Appl. Opt. 23(15), 2572–2582 (1984). [CrossRef]
23. M. Born and E. Wolf, Principles of Optics (Cambridge University Press, 2005), 7th ed.
24. M. Estribeau and P. Magnan, “Fast MTF measurement of CMOS imagers using ISO 12233 slanted-edge methodology,” Proc. SPIE 5251, 243–252 (2004). [CrossRef]
25. ISO, “Photography: Electronic still picture imaging – Resolution and spatial frequency responses,” ISO 12233:2017, International Organization for Standardization, Geneva, CH (2017).
26. E. Buhr, S. Günther-Kohfahl, and U. Neitzel, “Accuracy of a simple method for deriving the presampled modulation transfer function of a digital radiographic system from an edge image,” Med. Phys. 30(9), 2323–2331 (2003). [CrossRef]