## Abstract

The Lagrange invariant is a well-known law for optical imaging systems, formulated in the framework of ray optics. In this study, we reformulate this law in terms of wave optics and relate it to the resolution limits of various imaging systems. Furthermore, the modified Lagrange invariant is generalized to imaging along the z axis, resulting in the axial Lagrange invariant, which can be used to analyze the axial resolution of various imaging systems. To demonstrate the effectiveness of the theory, an analysis of the lateral and the axial imaging resolutions of Fresnel incoherent correlation holography (FINCH) systems is provided.

© 2014 Optical Society of America

## 1. Introduction

The Lagrange invariant, or theorem, also known in some textbooks [1] as the Smith-Helmholtz formula, is a fundamental law of geometrical optics. In words, this law states that in any optical imaging system comprising a lens, the product of the object/image size, the tangent of the marginal ray angle, and the index of refraction on one side of the lens is equal to the product of these three quantities on the other side of the lens. In reference to Fig. 1, the Lagrange invariant formula is [1]

$n_o y_o \tan\theta_o = n_i y_i \tan\theta_i, \qquad (1)$
where $n_o$ and $n_i$ are the indices of refraction, $y_o$ and $y_i$ are the object and image sizes, respectively, and $\theta_o$ and $\theta_i$ are the marginal ray angles. In general, the Lagrange invariant is valid for many types of imaging systems, rather than just a single-lens system. It is valid for imaging systems comprising an arbitrary number of lenses or spherical mirrors, coherent holographic systems, pinhole cameras [2] and many other imaging systems. In this study, we refer to all these systems, for which the Lagrange invariant is valid, as classical imaging systems.

Fig. 1 Schematics of a single lens imaging system.

As much as the Lagrange invariant is fundamental and general, apparently not all imaging systems obey this law. About four years after Fresnel incoherent correlation holography (FINCH) was proposed [3], Bouchal et al. were the first to point out that FINCH violates the Lagrange invariant [4]. Recently, Lai et al. published a paper entirely devoted to the topic of this violation by FINCH [5]. Kelner et al. noted that another holographic system, coined Fourier incoherent single channel holography (FISCH), also violates the Lagrange invariant [6]. Following [6], it is now known that two operation principles are common to the systems violating the Lagrange invariant. First, they are incoherent holographic systems. In other words, they record holograms of objects that radiate, or are illuminated by, spatially incoherent light. Second, in the hologram acquisition process, the wave from each object point is split into two waves that mutually interfere on the hologram plane. Under these two conditions, several different holographic configurations belong to the family of systems violating the Lagrange invariant [3, 6–18]. We denote this group of systems, according to their common properties, as the family of self-informative-reference holographic (SIRH) systems. By this classification, it is emphasized that both the self-reference wave and the signal wave, created from each object point, contain the information of the point's three-dimensional location. This feature is responsible for the violation of the Lagrange invariant by SIRH systems, as shown below. However, not all incoherent holographic recorders have self-informative reference waves. For example, in the system proposed in [19], all the information regarding the object location is removed from the reference wave. Consequently, the system of [19] obeys the Lagrange invariant, in a similar way to most other coherent holographic recorders, operating with non-informative reference beams.
Other well-known incoherent holographic recorders, the optical scanning holography system [20] and the multiple-viewpoint-projection holography systems [21], also obey the Lagrange invariant, simply because a reference beam is not involved at all in their recording process. Note that although most coherent holographic systems obey the Lagrange invariant, the confocal FINCH proposed in [18] can operate under both coherent and incoherent illumination, and for any type of illumination, the confocal FINCH violates the Lagrange invariant. Thus the confocal FINCH can be considered the first (to the best of our knowledge) coherent system that violates the Lagrange invariant. Beyond the interesting feature of violating a fundamental scientific law, there is a practical aspect to this violation, because, as shown in the following, there is a direct relation between the Lagrange invariant and the property of imaging resolution.

In this study, we first formulate the Lagrange invariant in a different manner that is more suitable for wave optics, rather than for geometrical optics. Holography, in general, is analyzed in the frame of wave optics and image resolution is derived in the frame of Fourier optics [22], which is also a subfield of wave optics. Therefore, the new formalism clarifies the exact relation between the resolution limits of a given system and the Lagrange invariant in its modified version. Following the treatment of the transverse resolution, we generalize the Lagrange invariant toward imaging along the depth dimension by defining the law of the axial Lagrange invariant. This new definition enables one to evaluate the axial resolution limit of SIRH systems in general and FINCH in particular.

## 2. Modified transverse Lagrange invariant

#### 2.1. Classical imaging systems

Based on linear system theory, it is well-known that the output image of a single lens imaging system (Fig. 1) is obtained by convolving the object function with the point spread function (PSF) of the system [22]. The PSF of the system is given as a scaled Fourier transform of the lens pupil function, usually considered as a disk-shaped aperture. For a pupil aperture of radius R, the width $W_i$ of the PSF in the image plane is proportional to $\lambda z_i/(n_i R)$, where λ is the central wavelength of light in vacuum and $z_i$ is the distance between the lens and the image plane. In the object plane, the width $W_o$ of the PSF is proportional to $\lambda z_o/(n_o R)$, where $z_o$ is the distance between the object plane and the lens. $W_o$ and $W_i$ are considered the minimal spot sizes that the system can image on the object plane and on the image plane, respectively.

Considering the system shown in Fig. 1, the marginal ray angles satisfy the relations $\tan\theta_o = R/z_o$ and $\tan\theta_i = R/z_i$. Therefore, Eq. (1) can be rewritten as follows,

$\dfrac{n_o y_o R}{\lambda z_o} = \dfrac{n_i y_i R}{\lambda z_i}. \qquad (2)$
Using the definition of the image-object transverse magnification $M_T = y_i/y_o$, and the definition of the spot transverse magnification $M_W = W_i/W_o$, Eq. (2) can be reformulated as follows,
$\dfrac{M_T}{M_W} = 1. \qquad (3)$
Equation (3) is the modified representation of the Lagrange invariant, formulated on the basis of wave theory. It simply states that for any object of two spots, the spots and the gap between them are magnified equally. This rule is very intuitive and commonly accepted when classical systems are considered. However, as shown in the following, this fundamental law can be broken by any system belonging to the class of SIRH systems.
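As a quick numeric sanity check, the following sketch evaluates Eq. (3) for the single-lens geometry of Fig. 1. The wavelength, aperture and distances are illustrative values chosen here (not parameters from the text), and the proportionality constants of the spot widths are dropped, since they cancel in the magnification ratio.

```python
# Sanity check of the modified Lagrange invariant, Eq. (3), for the
# single-lens system of Fig. 1, with indices of refraction taken as 1.
wavelength = 632.8e-9                # central wavelength [m], illustrative
R = 5e-3                             # pupil radius [m], illustrative
f = 0.25                             # focal length [m], illustrative
z_o = 0.30                           # object distance [m], illustrative
z_i = 1.0 / (1.0 / f - 1.0 / z_o)    # thin-lens imaging condition

M_T = z_i / z_o                      # transverse image magnification
W_o = wavelength * z_o / R           # minimal spot width on the object plane
W_i = wavelength * z_i / R           # spot width on the image plane
M_W = W_i / W_o                      # spot-width magnification

assert abs(M_T / M_W - 1.0) < 1e-12  # Eq. (3): M_T / M_W = 1
```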

How is this formalism related to image resolution? According to the well-known Rayleigh resolution criterion [1,22], the transverse resolution is actually the ability to separate two nearby spots on the image plane, each of which is an image of a corresponding object point. Therefore, for a spot separation of $y_i$ and a spot size of $W_i$, the transverse resolution is proportional to the ratio $y_i/W_i$; by increasing this ratio, the transverse resolution is improved. For a given point separation $y_o$ and a spot size $W_o$ on the object plane, the transverse resolution depends on the ratio between the transverse and the spot-size magnifications, $M_T/M_W$. In other words, for any investigated imaging system with the same physical parameters as those of the classical system of Fig. 1 (i.e., same $n_{o,i}$, $z_o$ and R), whenever the Lagrange invariant, formulated in Eq. (3), is violated, the imaging resolution of the two systems is inherently different. Explicitly, when the ratio $M_T/M_W$ of the investigated system is greater than 1, the separation between the two imaged spots is magnified more than their widths. Consequently, the transverse resolution is improved in comparison to the classical system, since the gap between the spots is increased more than their widths. Therefore, it is easier to resolve these spots.

The reference system shown in Fig. 1 can resolve details no smaller than $0.82\lambda/(n_o\sin\theta_o)$ and $0.61\lambda/(n_o\sin\theta_o)$ for coherent and incoherent illumination, respectively [1]. Apparently, there are four well-known ways to improve the transverse resolution of an imaging system: (1) reduce the wavelength λ of the illumination [23]; (2) increase the aperture radius R of the system [24]; (3) reduce the distance $z_o$ between the object and the system aperture; (4) increase the index of refraction $n_o$ in the object space [23]. In all these ways, the Lagrange invariant of Eq. (3) remains valid. Moreover, the resolution improvement can be determined from Eq. (3) once the ratio between the gap of the spots and their width is extracted from this equation, as follows:

$\dfrac{y_i}{W_i} = \dfrac{y_o}{W_o} \propto \dfrac{y_o n_o R}{\lambda z_o}. \qquad (4)$
For a constant object gap $y_o$, increasing the parameters $n_o$ and/or R, or decreasing the parameters λ and/or $z_o$, increases the ratio $y_i/W_i$ and thus improves the transverse resolution. In other words, the resolution improvement in these four ways is achieved by narrowing the width of the object spot that the system can image.

As mentioned above, a different approach to improve image resolution is to look for systems that violate the modified Lagrange invariant of Eq. (3) in a way that satisfies the condition

$\left|\dfrac{M_T}{M_W}\right| > 1, \qquad (5)$
which implies that the ratio between the gap and the width of any two imaged points is higher than that of any comparable classical system. Therefore, the resolving capability of such a system should be better than that of any classical system with the same physical parameters (i.e., same $n_{o,i}$, $z_o$ and R). As previously mentioned, a system that can indeed violate the Lagrange invariant, in the sense of Eq. (5), is the FINCH system, which is analyzed next.

#### 2.2. Fresnel incoherent correlation holography

The FINCH system, in its various configurations, has been analyzed in several papers. Here, we only briefly mention its main principles of operation. A schematic configuration of a dual lens FINCH system [25–28] is shown in Fig. 2. It is referred to as a dual lens FINCH system since, as described in the following, for every object there are two images created by two different effective lenses. The observed object is spatially incoherent, thus light beams that are emitted or scattered from two different object points cannot interfere with each other. Hence, the system can be analyzed by considering a single point source object. A spherical light beam is emitted from a point source located at a working distance zo from the objective lens Lo, and propagates into the FINCH system. An input polarizer, P1, is set at a 45° angle to the active axis of a spatial light modulator (SLM), which allows the formation of two in-parallel imaging systems in a common-path single-channel configuration. The SLM acts as a spherical lens, but only for the polarization components of the beam that are parallel to its active axis. Polarization components of the input beam that are perpendicular to its active axis are not modulated; for them, the SLM is a transparent element. The system can thus be considered as two imaging systems, each acting on one of two orthogonal polarization components of the light. In these systems, the input beam of light is collected by the objective lens Lo and then further concentrated to the image points beyond the SLM. In one of the two systems, the SLM does not influence the beam, and an image is formed at the image point a2, a distance f2 from the SLM; in the other, a converging diffractive lens is displayed on the SLM and the beam is concentrated into the image point a1, a distance f1 from the SLM.

Fig. 2 Schematics of FINCH: (a) Recording system. (b) Reconstruction system.

To record a hologram of maximum achievable resolution [26], a charge-coupled device (CCD) is positioned between the two image points, a1 and a2, so that a perfect overlap is achieved between the beam diverging from the image point a1 and the beam converging toward the image point a2. The working distance zo is defined as the distance which yields this perfect beam overlap on the CCD plane. Note that interference can occur between these two beams since they originate from the same point source, granted that the maximal optical path difference (OPD) between the two is shorter than the coherence distance of the light source [27]. The output polarizer, P2, is used to project the polarization components of the two beams onto a common orientation. Usually, P2 is also set at a 45° angle to the active axis of the SLM, but other angles can be used to control the relative intensity of the two beams [25]. The intensity of the two-beam interference pattern is recorded by the CCD, giving rise to a 0th order term and two other terms attributed to the holographic image of the object point and its twin. A phase-shifting procedure [25–28], utilizing the SLM, requires at least three exposures and is performed so that only the holographic image term remains. The spatial incoherence of the object assures that the final recorded FINCH hologram is a summation over the intensities of all point source interference patterns. The recorded object can then be reconstructed from the hologram through a digital Fresnel propagation to a specific reconstruction distance, $z_r$ [25–28].
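The phase-shifting step can be illustrated with a minimal sketch. Assuming three exposures with illustrative phase steps of 0, 2π/3 and 4π/3 (the exact steps and weighting used in [25–28] may differ), the weighted sum below cancels the 0th order term and the twin-image term of a single point's interference pattern, leaving only the holographic image term:

```python
import numpy as np

# Each exposure of a single object point is modeled as a bias plus an
# interference term, I_k = A + B*cos(phi + theta_k), where phi carries the
# point's location information and theta_k is the SLM phase step.
theta = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])  # illustrative steps
A, B, phi = 2.0, 0.7, 1.234                            # arbitrary test values

I = A + B * np.cos(phi + theta)        # the three recorded intensities

# Weighted sum: the bias and twin-image contributions cancel, leaving a term
# proportional to exp(1j*phi), i.e. the desired holographic image term.
H = np.sum(I * np.exp(-1j * theta))

assert np.isclose(H, 1.5 * B * np.exp(1j * phi))
```

The cancellation follows because the three unit phasors exp(-1j·theta_k) sum to zero, as do their squares, so only the exp(1j·phi) term survives the summation.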

In order to obtain the ratio between the image and the spot magnifications for FINCH, and to prove the violation of the Lagrange invariant, we use the results obtained in Appendix A, where it is assumed, for simplicity, that the index of refraction is equal to 1 everywhere. We assume that there is a single object point at $(x_s, 0, -z_s)$ imaged by FINCH into two points, one formed before the CCD at $(-x_s z_1/z_s, 0, z_1)$ and the other formed beyond the CCD at $(-x_s z_2/z_s, 0, z_2)$. Figure 2 is relevant for the analysis, but to keep the description general, the beam overlap condition is not necessarily assumed, and therefore the system parameters $z_o$, $f_1$ and $f_2$ in the figure are replaced in the analysis by the arbitrary distances $z_s$, $z_1$ and $z_2$, respectively. Each of the above-mentioned image points emits a spherical wave, and these two waves interfere on the CCD.

The various magnification values of FINCH are denoted herein with an overbar to distinguish them from those of the classical systems. From the result of Eq. (26), the transverse magnification of FINCH is

$|\bar{M}_T| = \dfrac{z_h}{z_s}, \qquad (6)$
where zh is the distance between the SLM and the CCD. Based on Eq. (27), the reconstruction distance zr, between the reconstructed image and the hologram, is
$z_r = \dfrac{(z_h - z_1)(z_2 - z_h)}{z_2 - z_1}. \qquad (7)$
When the two image points satisfy the condition of perfect overlap between the two beams on the CCD plane, the distances $z_{1,2}$ are equal to the parameters $f_{1,2}$, which satisfy the relation
$z_h = \dfrac{2 f_1 f_2}{f_1 + f_2}. \qquad (8)$
Note that the result of Eq. (8) can be easily obtained from Fig. 2, based on the rules of similar triangles.
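Eq. (8) can be checked numerically. The short sketch below solves the overlap condition for $f_1$ given $f_2$ and $z_h$; the parameter values are those used in the numerical study of Section 4, and the printed $f_1$ values reproduce the ones quoted there.

```python
# The overlap condition of Eq. (8), z_h = 2*f1*f2/(f1 + f2), solved for f1.
def f1_from_overlap(f2, z_h):
    return z_h * f2 / (2.0 * f2 - z_h)

z_h = 90.0  # SLM-to-CCD distance [cm], as in the numerical study of Section 4
for f2 in (100.0, 140.0, 260.0, 1e8):
    f1 = f1_from_overlap(f2, z_h)
    # verify that the recovered f1 indeed satisfies Eq. (8)
    assert abs(2 * f1 * f2 / (f1 + f2) - z_h) < 1e-6
    print(f"f2 = {f2:g} cm  ->  f1 = {f1:.2f} cm")
```

For f2 → ∞ the expression tends to f1 = z_h/2, which matches the plane-wave special case discussed later in this section.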

To calculate the radius of the hologram in the general case, two situations should be distinguished. In both cases, a1 and a2 are on different sides of the CCD; in one, the distances satisfy the condition $z_h \le 2z_1z_2/(z_1+z_2)$, while in the other they satisfy $z_h > 2z_1z_2/(z_1+z_2)$. In the first case, the beam that diverges from a1 dictates the radius of the hologram. In the second case, the radius of the hologram is determined by the wave that converges toward a2. Based on the geometry of Fig. 2, the radius of the hologram $R_H$ of a point object for the above-mentioned cases is

$R_H = \left|\dfrac{R(z_k - z_h)}{z_k}\right|, \quad k = 1 \ \text{if}\ z_h \le \dfrac{2z_1 z_2}{z_1 + z_2}, \quad k = 2 \ \text{if}\ z_h > \dfrac{2z_1 z_2}{z_1 + z_2}. \qquad (9)$
The marginal angle on the image reconstruction side satisfies the relation $\tan\theta_r = R_H/z_r$, and therefore using Eqs. (7) and (9) yields,
$\tan\theta_r = \dfrac{R_H}{z_r} = \left|\dfrac{R(z_k - z_h)/z_k}{(z_h - z_1)(z_2 - z_h)/(z_2 - z_1)}\right| = \left|\dfrac{R(z_2 - z_1)}{z_k(z_{3-k} - z_h)}\right|, \quad k = 1 \ \text{if}\ z_h \le \dfrac{2z_1 z_2}{z_1 + z_2}, \quad k = 2 \ \text{if}\ z_h > \dfrac{2z_1 z_2}{z_1 + z_2}. \qquad (10)$
Knowing that the marginal angle on the object side satisfies the relation $\tan\theta_o = R/z_s$, and using Eq. (10), enables one to calculate the spot magnification as follows,
$|\bar{M}_W| = \left|\dfrac{W_i}{W_o}\right| = \left|\dfrac{\tan\theta_o}{\tan\theta_r}\right| = \left|\dfrac{R/z_s}{R(z_2 - z_1)/[z_k(z_{3-k} - z_h)]}\right| = \left|\dfrac{z_k(z_{3-k} - z_h)}{z_s(z_2 - z_1)}\right|, \quad k = 1 \ \text{if}\ z_h \le \dfrac{2z_1 z_2}{z_1 + z_2}, \quad k = 2 \ \text{if}\ z_h > \dfrac{2z_1 z_2}{z_1 + z_2}. \qquad (11)$
Finally, the results of Eqs. (6) and (11) yield the ratio between the lateral and the spot size magnifications for FINCH as follows,
$\left|\dfrac{\bar{M}_T}{\bar{M}_W}\right| = \left|\dfrac{z_h/z_s}{z_k(z_{3-k} - z_h)/[z_s(z_2 - z_1)]}\right| = \left|\dfrac{z_h(z_2 - z_1)}{z_k(z_{3-k} - z_h)}\right|, \quad k = 1 \ \text{if}\ z_h \le \dfrac{2z_1 z_2}{z_1 + z_2}, \quad k = 2 \ \text{if}\ z_h > \dfrac{2z_1 z_2}{z_1 + z_2}. \qquad (12)$
Equation (12) is the modified Lagrange formula of FINCH for all cases in which the two image points are distributed on both sides of the CCD. There are three special positions of the two image points that should be noted. In the first, the two image points satisfy the condition of perfect overlap between the two beam cones on the CCD plane, such that Eq. (8) is satisfied. Substituting Eq. (8) and $z_{1,2,k} = f_{1,2,k}$ into Eq. (12) yields,

$\left|\dfrac{\bar{M}_T}{\bar{M}_W}\right| = 2. \qquad (13)$

Evidently, the Lagrange invariant is violated in Eq. (13), and the separation between the two imaged spots is magnified twice as much as the width of each spot. As a result, an improvement of the transverse resolution by a factor of 2 is achieved with FINCH in comparison to classical coherent imaging systems, and by a factor of about 1.5 in comparison to classical incoherent imaging systems, as is extensively discussed in [26]. It should be emphasized that this superiority of FINCH does not violate the well-known Abbe resolution limit [1], because FINCH is an incoherent imager in which the spatial bandwidth is twice as wide as that of coherent systems [22]. The resolution improvement of FINCH by a factor of about 1.5 in comparison to classical incoherent imagers is not achieved by widening the bandwidth, but by obtaining a transfer function for FINCH that is more uniform than the cone-like shape of the incoherent transfer function [26]. Therefore, the superiority by a factor of 2 over coherent systems, and by a factor of about 1.5 over incoherent systems, is well established within the frame of Abbe's theory.

Returning to Eq. (12), it can be shown that shifting the object point toward the objective, up to the point where the image a1 is located on the CCD, converts FINCH into a classical imaging system, in the sense that instead of a hologram the CCD records the image of the object directly. Indeed, substituting the parameters $z_1 = z_h$ and k = 1 into Eq. (12) yields the result $|\bar{M}_T/\bar{M}_W| = 1$, which indicates that FINCH, in this extreme case, obeys the Lagrange invariant like any other classical incoherent imaging system. The opposite extreme case, where the object point is shifted away from the objective up to the point where the image a2 is located on the CCD, again yields the result of a classical imaging system, $|\bar{M}_T/\bar{M}_W| = 1$, when the parameters $z_2 = z_h$ and k = 2 are substituted into Eq. (12).
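These special cases can be verified with a short sketch of Eq. (12), including the selection of k; the parameter values ($z_h$ = 90 cm, $f_2$ = 140 cm) follow the numerical study of Section 4 and are used here only for illustration.

```python
# The lateral Lagrange formula of FINCH, Eq. (12), with k selected according
# to which beam dictates the hologram radius [Eq. (9)].
def lateral_ratio(z1, z2, z_h):
    k_is_1 = z_h <= 2 * z1 * z2 / (z1 + z2)
    zk, zother = (z1, z2) if k_is_1 else (z2, z1)
    return abs(z_h * (z2 - z1) / (zk * (zother - z_h)))

z_h = 90.0
f2 = 140.0
f1 = z_h * f2 / (2 * f2 - z_h)       # overlap condition, Eq. (8)

# Perfect-overlap case, z1 = f1 and z2 = f2: the ratio is 2 [Eq. (13)].
assert abs(lateral_ratio(f1, f2, z_h) - 2.0) < 1e-12

# Extreme cases z1 = z_h or z2 = z_h: FINCH degenerates into a classical
# imager and the ratio returns to 1.
assert abs(lateral_ratio(z_h, f2, z_h) - 1.0) < 1e-12
assert abs(lateral_ratio(f1, z_h, z_h) - 1.0) < 1e-12
```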

In order to investigate the lateral magnification ratio along $z_s$ inside the region of interest, and to see its sensitivity to the various system parameters, one can make use of the imaging condition of a spherical lens to obtain the following equation

$z_k = \dfrac{z_o z_s f_k}{z_s f_k + z_o z_s - z_o f_k}, \quad k = 1, 2. \qquad (14)$
Substituting Eqs. (8) and (14) and the relation $z_s = \alpha z_o$ into Eq. (12) yields,
$\left|\dfrac{\bar{M}_T}{\bar{M}_W}\right| = \dfrac{2\alpha z_o(f_2 - f_1)}{2 f_2 f_1|1 - \alpha| + \alpha z_o(f_2 - f_1)}. \qquad (15)$
It is evident from Eq. (15) that at the working plane, where α = 1, $|\bar{M}_T/\bar{M}_W| = 2$. At the ends of the region of interest ($z_1 = z_h$ or $z_2 = z_h$), it can easily be shown, based on the imaging equations of a lens and Eq. (8), that the parameter α satisfies the condition
$\dfrac{|\alpha - 1|}{\alpha} = \dfrac{z_o(f_2 - f_1)}{2 f_2 f_1}. \qquad (16)$
Substituting Eq. (16) into Eq. (15) yields the result $|\bar{M}_T/\bar{M}_W| = 1$, which was already mentioned above. However, the expression of Eq. (15) can provide more information about the lateral magnification ratio than Eq. (12). Explicitly, one can see that the lateral magnification ratio falls from 2 to 1 with a steeper slope as the working distance $z_o$, or the difference between the two focal distances $(f_2 - f_1)$, becomes smaller.
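A minimal sketch of Eq. (15), with the edge values of α obtained by solving Eq. (16) for its two branches (α < 1 and α > 1), confirms these limits numerically; the parameter values follow the numerical study of Section 4.

```python
def ratio_eq15(alpha, z_o, f1, f2):
    # Eq. (15): lateral magnification ratio inside the region of interest
    num = 2 * alpha * z_o * (f2 - f1)
    den = 2 * f2 * f1 * abs(1 - alpha) + alpha * z_o * (f2 - f1)
    return num / den

z_o, f2 = 30.0, 140.0
f1 = 90.0 * f2 / (2 * f2 - 90.0)       # Eq. (8) with z_h = 90 cm

# At the working plane (alpha = 1) the ratio is 2.
assert abs(ratio_eq15(1.0, z_o, f1, f2) - 2.0) < 1e-12

# Eq. (16): |alpha - 1|/alpha = z_o*(f2 - f1)/(2*f2*f1) at the region edges;
# the two branches give alpha = 1/(1 + c) and alpha = 1/(1 - c).
c = z_o * (f2 - f1) / (2 * f2 * f1)
for alpha_edge in (1.0 / (1.0 + c), 1.0 / (1.0 - c)):
    assert abs(ratio_eq15(alpha_edge, z_o, f1, f2) - 1.0) < 1e-9
```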

The Lagrange invariant is violated by FINCH (in the sense that $|\bar{M}_T/\bar{M}_W| > 1$) between the two extreme cases, $z_1 = z_h$ and $z_2 = z_h$, only because both the reference and the signal beams, radiated from every object point, contain the information on the lateral position of this point source. The lateral location of an object point is encoded into the linear phase of a wave. Thus, both beams emitted from any object point contain the same linear phase, with parameters related to the xy location of the point source. These two linear phases of the two beams are summed constructively in the wave interference event to a maximal value only if the overlap condition between the beam cones is fulfilled. Moreover, if the linear phases are constructively summed, the result is a linear phase with parameters multiplied by a factor of 2 relative to the original linear phases. In the reconstruction process, this resulting linear phase is decoded back into a lateral image magnification increased by a factor of 2 relative to that of a classical imager. On the other hand, the magnification of the spot width is not affected by the additional information carried by the reference beam, because the width of the image spot is determined only by the size of the overlap between the two beams; the linear phases do not affect the size of this overlap. This observation can easily be demonstrated for the special case of $f_2 \to \infty$, in which one beam is a plane wave and the other is a spherical beam converging to the point located a distance $f_1$ from the SLM. In this case, the cone overlap condition is achieved if $f_1 = z_h/2$, and thus $z_r = f_1$ and $R_H = R$. The size of the reconstructed image spot is proportional to $\lambda z_r/(n_i R_H) = \lambda z_h/(2n_i R) = \lambda f_1/(n_i R)$, which is the same spot size obtained by a classical imaging system comprising two lenses with object and image distances of $z_o$ and $f_1$, respectively. However, according to Eq. (6), the lateral magnification of this FINCH configuration is $z_h/z_o = 2f_1/z_o$, which is twice the magnification of the corresponding classical imaging system. In other words, the location information carried by the two beams magnifies the gap between the spots twice as much as the spots' sizes are magnified.

Based on the above discussion, the conclusion is that if both image spots a1,2 are obtained on one side of the CCD, in front of it or beyond it, the linear phases are summed destructively. In that case, the Lagrange invariant is violated, but in the sense that $|\bar{M}_T/\bar{M}_W| < 1$, and therefore the lateral resolution of this kind of FINCH is expected to be worse than that of a classical imaging system. Furthermore, one can realize that if the image spot a1 is located on a different side of the CCD than a2, but the condition of perfect overlap of the beam cones is not satisfied, then the Lagrange invariant is violated in the form $1 < |\bar{M}_T/\bar{M}_W| < 2$, as is indeed reflected in Eq. (15). In that case, the lateral resolution of FINCH is better than that of a classical imaging system but is not optimal.

## 3. Modified axial Lagrange invariant

#### 3.1. Classical imaging systems

Formulating the Lagrange invariant in terms of a magnification ratio enables one to generalize it to the regime of axial imaging. Moreover, from this generalization, the axial resolution of imaging systems in general, and of FINCH in particular, can be derived in a similar fashion as was done for the lateral resolution. Knowing the axial Lagrange formulas for both classical and FINCH systems, and the axial resolution of the classical system, enables one to estimate the axial resolution of FINCH.

Consider a single object point imaged by the single lens of Fig. 1. The length of its image spot along the z axis, $\Delta_i$, is proportional to $\lambda z_i^2/(n_i R^2)$. The minimal axial length $\Delta_o$ that the system can image is proportional to $\lambda z_o^2/(n_o R^2)$ [1]. Therefore, the axial magnification of the spot length is

$M_\Delta = \dfrac{\Delta_i}{\Delta_o} = \dfrac{n_o z_i^2}{n_i z_o^2} = \dfrac{n_i}{n_o}M_W^2 = \dfrac{n_i}{n_o}M_T^2, \qquad (17)$
where the last equality is obtained from the lateral Lagrange invariant [Eq. (3)]. The axial image magnification of classical imagers is given in [22] as follows:
$M_A = \left|\dfrac{dz_i}{dz_o}\right| = \dfrac{n_i}{n_o}M_T^2. \qquad (18)$
Therefore, the axial Lagrange invariant can now be formulated as
$\dfrac{M_A}{M_\Delta} = 1. \qquad (19)$
As in the case of the transverse Lagrange invariant, for any classical imaging system, the axial image magnification and the spot length magnification are always equal. Next, we derive the axial Lagrange formula for FINCH.
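The classical axial invariant can be checked numerically. The sketch below evaluates Eq. (19) for the thin lens of Fig. 1 with $n_o = n_i = 1$; the focal length and object distance are illustrative values, and $M_A$ is estimated by a central difference on the thin-lens imaging condition.

```python
# Numeric check of the axial Lagrange invariant, Eq. (19), for a thin lens
# with n_o = n_i = 1; parameter values are illustrative only.
f = 0.25    # focal length [m]
z_o = 0.30  # object distance [m]

def z_i(z):
    # thin-lens imaging condition, 1/z + 1/z_i = 1/f
    return 1.0 / (1.0 / f - 1.0 / z)

# Axial image magnification M_A = |dz_i/dz_o| via a central difference.
h = 1e-6
M_A = abs(z_i(z_o + h) - z_i(z_o - h)) / (2 * h)

# Axial spot-length magnification, Eq. (17): M_Delta = (z_i/z_o)**2 for n = 1.
M_Delta = (z_i(z_o) / z_o) ** 2

assert abs(M_A / M_Delta - 1.0) < 1e-6   # Eq. (19): M_A / M_Delta = 1
```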

#### 3.2 Fresnel incoherent correlation holography

To check whether a SIRH system in general, or a FINCH system in particular, violates the axial Lagrange invariant [Eq. (19)], one needs to calculate the values of $\bar{M}_A$ and $\bar{M}_\Delta$ for those systems. In the present study, we only calculate the axial magnifications of the FINCH configuration shown in Fig. 2, while the detailed calculations of $\bar{M}_A$ and $\bar{M}_\Delta$ are given in Appendixes B and C, respectively. Once the ratio $\bar{M}_A/\bar{M}_\Delta$ is known, one can compare the axial resolution of FINCH to that of classical systems. If, and only if, this ratio is greater than 1, FINCH can axially resolve better than classical systems, in a similar way to the analysis of the lateral Lagrange invariant described in Section 2.

The axial image magnification of FINCH is calculated in Appendix B, and the result given by Eq. (36) is

$\bar{M}_A = \left|\dfrac{dz_r}{dz_s}\right| = \left|\dfrac{z_h^3(f_1 + f_2)(1 - \alpha)}{z_o^3(f_2 - f_1)\alpha^3}\right|. \qquad (20)$
Next, the axial spot magnification should be calculated in order to find the ratio between the axial magnifications. The axial spot-length magnification is calculated in Appendix C and, according to Eq. (42), is equal to,
$\bar{M}_\Delta = \left|\dfrac{\Delta_i}{\Delta_o}\right| = \left(\dfrac{z_h\left[\alpha z_o(f_2 - f_1) + 2 f_2 f_1|\alpha - 1|\right]}{2\alpha^2 z_o^2(f_2 - f_1)}\right)^2. \qquad (21)$
Dividing Eq. (20) by Eq. (21) yields the axial Lagrange formula as follows,
$\left|\dfrac{\bar{M}_A}{\bar{M}_\Delta}\right| = \left|\dfrac{8 f_1 f_2(1 - \alpha)\alpha z_o(f_2 - f_1)}{\left[\alpha z_o(f_2 - f_1) + 2 f_1 f_2|1 - \alpha|\right]^2}\right|. \qquad (22)$
The right-hand side of Eq. (22) is not a simple number, but rather a relatively complicated expression depending on the specific parameters of each investigated FINCH system. Moreover, unlike the Lagrange formulas of classical systems, here the magnification ratio depends on the axial location of the observed spots through the parameter α, indicating that the axial resolution of FINCH is not constant along the z axis. In the region of interest, where all the object points are close to the working plane and thus α is close to 1, the Lagrange formula of Eq. (22) can be approximated as,
$\left|\dfrac{\bar{M}_A}{\bar{M}_\Delta}\right| \cong \left|\dfrac{8 f_1 f_2(1 - \alpha)}{z_o(f_2 - f_1)}\right|. \qquad (23)$
This axial magnification ratio is much smaller than 1 for α→1, but it can grow steeply if the working distance $z_o$ and the difference between $f_1$ and $f_2$ are small. In any case, the conclusion from Eq. (23) is that close to the working plane the axial magnification ratio is always smaller than 1. Consequently, the axial resolution near the working plane is always worse than that of classical imagers. Thus, one can argue that the advantage of FINCH in lateral resolution is mitigated by its disadvantage in axial resolution.

Returning to Eq. (22), at the two ends of the region of interest, where the image point a1 is on the CCD plane (i.e., $z_1 = z_h$), or the other image point a2 is on the CCD plane (i.e., $z_2 = z_h$), the parameter α (which equals $z_s/z_o$) satisfies the condition of Eq. (16). Substituting Eq. (16) into Eq. (22) yields the result $|\bar{M}_A/\bar{M}_\Delta| = 1$ for both ends of the region of interest. This result is expected since, as mentioned above, at the ends of the region the recorded holograms degenerate into images of objects captured by a classical incoherent two-lens imager. In the range between the working plane and the two ends of the region, the axial magnification ratio never exceeds 1. This can be shown by searching for the maximal value of the axial magnification ratio given by Eq. (22) and showing that the maxima are obtained outside the region of interest. To summarize, the axial magnification ratio in the region of interest starts from 0 at the working plane and grows to 1 at the two edges of the region. Therefore, the axial resolution of FINCH improves as the object moves away from the working plane, but it is always worse than that of a classical system.
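These limits can be verified with a short sketch of Eqs. (22) and (23); the parameter values follow the numerical study of Section 4, and the edge values of α are obtained by solving Eq. (16).

```python
def axial_ratio_eq22(alpha, z_o, f1, f2):
    # Eq. (22): exact axial magnification ratio of FINCH
    num = 8 * f1 * f2 * (1 - alpha) * alpha * z_o * (f2 - f1)
    den = (alpha * z_o * (f2 - f1) + 2 * f1 * f2 * abs(1 - alpha)) ** 2
    return abs(num / den)

def axial_ratio_eq23(alpha, z_o, f1, f2):
    # Eq. (23): approximation valid close to the working plane (alpha -> 1)
    return abs(8 * f1 * f2 * (1 - alpha) / (z_o * (f2 - f1)))

z_o, f2 = 30.0, 140.0
f1 = 90.0 * f2 / (2 * f2 - 90.0)       # overlap condition, Eq. (8)

# At the working plane (alpha = 1) the axial ratio vanishes.
assert axial_ratio_eq22(1.0, z_o, f1, f2) == 0.0

# Near the working plane the approximation of Eq. (23) tracks Eq. (22).
a = 1.001
assert abs(axial_ratio_eq22(a, z_o, f1, f2)
           - axial_ratio_eq23(a, z_o, f1, f2)) < 1e-2

# At the region edges, where Eq. (16) holds, the ratio returns to 1.
c = z_o * (f2 - f1) / (2 * f1 * f2)
for a_edge in (1.0 / (1.0 + c), 1.0 / (1.0 - c)):
    assert abs(axial_ratio_eq22(a_edge, z_o, f1, f2) - 1.0) < 1e-9
```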

## 4. Numerical investigation and experimental results

#### 4.1. Numerical investigation

The lateral and axial magnification ratios of FINCH were numerically simulated for various system parameters. In the presented results, the lateral magnification ratios were calculated as the ratio $|\bar{M}_T/\bar{M}_W|$, where $\bar{M}_T$ was obtained from Eq. (26), and $\bar{M}_W$ was calculated as the ratio $(z_r R)/(z_s R_H)$, where $z_r$ and $R_H$ were obtained from Eqs. (34) and (9), respectively.

In the first numerical experiment, the working distance was kept constant at $z_o$ = 30 cm and the distance between the SLM and the CCD was fixed at $z_h$ = 90 cm. The magnification ratios were checked for four values of $f_2$ = 100, 140, 260 and 10⁸ cm (the latter being effectively equivalent to infinity), where $f_1$ was dictated by the values of $f_2$, via the overlap condition of Eq. (8), to be $f_1$ = 81.82, 66.32, 54.42 and 45 cm, respectively. Figure 3(a) shows the lateral magnification ratio as a function of $z_s$, for the different values of $f_{1,2}$, in the region where one image point is before and the other is beyond the CCD. The main observations discussed in relation to Eq. (15) are explicitly demonstrated in Fig. 3(a); the lateral magnification ratio decreases from 2 at the working distance to 1 at both ends of the region, and the decrease is steeper the smaller the difference $f_2 - f_1$.

Fig. 3 Lateral magnification ratio versus the axial object location for: (a) Four values of f1,2. (b) Four values of working distance zo with constant values of f2 = 140cm and f1 = 66.32cm.

Figure 3(b) shows the same lateral magnification ratio as a function of $z_s$, but for different values of $z_o$ = 30, 15, 10 and 7.5 cm, and constant values of $f_2$ = 140 cm, $f_1$ = 66.32 cm and $z_h$ = 90 cm. Here again, the lateral magnification ratio decreases from 2 to 1, and the decrease is steeper the smaller $z_o$.
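The curves of Fig. 3 can be reproduced, up to plotting, from Eqs. (8), (9), (12) and (14). The sketch below checks that the lateral ratio equals 2 at the working plane and stays between 1 and 2 inside the region of interest, for the parameters of the first numerical experiment.

```python
def z_k(z_s, z_o, f_k):
    # Eq. (14): image distance behind the SLM for an object at z_s
    return z_o * z_s * f_k / (z_s * f_k + z_o * z_s - z_o * f_k)

def lateral_ratio(z_s, z_o, f1, f2, z_h):
    # Eq. (12), with k selected by the hologram-radius condition of Eq. (9)
    z1, z2 = z_k(z_s, z_o, f1), z_k(z_s, z_o, f2)
    zk, zother = (z1, z2) if z_h <= 2 * z1 * z2 / (z1 + z2) else (z2, z1)
    return abs(z_h * (z2 - z1) / (zk * (zother - z_h)))

z_o, z_h, f2 = 30.0, 90.0, 140.0     # parameters of the first experiment [cm]
f1 = z_h * f2 / (2 * f2 - z_h)       # overlap condition, Eq. (8)

# The ratio is 2 at the working plane and decays toward 1 away from it.
assert abs(lateral_ratio(z_o, z_o, f1, f2, z_h) - 2.0) < 1e-9
for z_s in (27.0, 28.0, 29.0, 31.0, 32.0, 33.0):
    assert 1.0 < lateral_ratio(z_s, z_o, f1, f2, z_h) < 2.0
```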

The axial magnification ratio as a function of $z_s$ is depicted in Fig. 4: in Fig. 4(a) for different values of $f_{1,2}$, and in Fig. 4(b) for different values of $z_o$. The main observations discussed in relation to Eq. (22) are explicitly demonstrated in Fig. 4; the axial magnification ratio increases from 0 at the working distance to 1 at both ends of the region, and the growth is steeper the smaller the difference $f_2 - f_1$ or the distance $z_o$.

Fig. 4 Axial magnification ratio versus the axial object location for: (a) Four values of f1,2. (b) Four values of working distance zo with constant values of f2 = 140cm and f1 = 66.32cm.

In the simulation of Fig. 4, the axial magnification ratios were calculated as the ratio $|\bar{M}_A/\bar{M}_\Delta|$, where $\bar{M}_A$ was obtained directly from the derivative $dz_r/dz_s$, with $z_r$ calculated from Eq. (34), and $\bar{M}_\Delta$ was calculated as the ratio $(z_r R/(z_s R_H))^2$, where $z_r$ and $R_H$ were obtained as before [i.e., from Eqs. (34) and (9), respectively].
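This computation can be sketched as follows, using Eqs. (7) and (14) in place of the appendix expressions for $z_r$, and a central difference for the derivative $dz_r/dz_s$; the parameter values follow the first numerical experiment, and the aperture radius cancels in the ratio.

```python
z_o, z_h, f2 = 30.0, 90.0, 140.0
f1 = z_h * f2 / (2 * f2 - z_h)       # overlap condition, Eq. (8)
R = 1.0                              # aperture radius; cancels in the ratio

def images(z_s):
    # Eq. (14): the two image distances z1, z2 behind the SLM
    return tuple(z_o * z_s * f / (z_s * f + z_o * z_s - z_o * f)
                 for f in (f1, f2))

def z_r(z_s):
    z1, z2 = images(z_s)
    return (z_h - z1) * (z2 - z_h) / (z2 - z1)       # Eq. (7)

def R_H(z_s):
    z1, z2 = images(z_s)
    zk = z1 if z_h <= 2 * z1 * z2 / (z1 + z2) else z2
    return abs(R * (zk - z_h) / zk)                  # Eq. (9)

def axial_ratio(z_s, h=1e-5):
    M_A = abs(z_r(z_s + h) - z_r(z_s - h)) / (2 * h)     # |dz_r/dz_s|
    M_Delta = (z_r(z_s) * R / (z_s * R_H(z_s))) ** 2     # spot-length magnif.
    return M_A / M_Delta

# The axial ratio is near 0 just off the working plane and grows toward 1
# as the object moves toward the edges of the region.
assert axial_ratio(z_o + 1e-3) < 0.01
assert axial_ratio(28.0) < axial_ratio(27.5) < 1.0
```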

#### 4.2. Experimental results

In order to demonstrate the main effects discussed in this study, comparative experiments of FINCH versus an incoherent two-lens imaging system were carried out, where both systems have the same numerical aperture. In both experiments, the input object contained two resolution charts (RCs): one (RC1, a negative NBS 1963A) was fixed at the working plane, 30 cm from the objective lens, and the other (RC2, a 1951 USAF) was shifted back and forth along the z axis. A beam splitter was used as a beam combiner for the two resolution charts. The resolution charts were back-illuminated using two LEDs (Thorlabs LED635L, 170 mW, λ = 632.8 nm, Δλ = 15 nm filtered to Δλ = 3 nm using a band-pass filter). The focal length of the objective lens $L_o$, the working distance, and the SLM–CCD distance were chosen as $f_o$ = 25 cm, $z_o$ = 30 cm and $z_h$ = 90 cm, respectively. Another parameter of the system was $f_2$ = 140 cm, which, according to Eq. (8), imposes the value $f_1 \approx 66.3$ cm. The two polarizers, $P_1$ and $P_2$, were both set at a 45° angle to the SLM (Holoeye PLUTO, 1920 × 1080 pixels, 8 μm pixel pitch, phase-only modulation).

In the experiments, we compared the results of a conventional two-lens imaging system and of the FINCH system. Throughout the experiments, RC1 was fixed on the working plane, a distance zo = 30cm from the objective lens. RC2 was shifted from the point zs = 27cm to the working plane (depicted in Fig. 5), and later from the working plane to the point zs = 33cm (depicted in Fig. 6). In these two figures, the left-hand column contains images from a classical two-lens imager. The central and right-hand columns contain the reconstructed images from FINCH. The central column presents reconstructions at the fixed zr that is the best in-focus distance for RC1. The right-hand column shows reconstructions at varying reconstruction distances, where in each row zr is chosen such that the image of RC2 is in focus.

Fig. 5 Experimental results with RC1 (NBS 1963A) located at a fixed location of 30cm away from the objective lens and RC2 (1951 USAF) located at various zs locations of (a) 27cm to (g) 30cm. Left-hand column: two-lens imager with RC1 plane in focus; central column: FINCH reconstruction of RC1 plane of best focus; right-hand column: FINCH reconstruction of RC2 plane of best focus.

Fig. 6 Experimental results with RC1 (NBS 1963A) located at a fixed location of 30cm away from the objective lens and RC2 (1951 USAF) located at various zs locations of (a) 30cm to (g) 33cm. Left-hand column: two-lens imager with RC1 plane in focus; central column: FINCH reconstruction of RC1 plane of best focus; right-hand column: FINCH reconstruction of RC2 plane of best focus.

Figures 5 and 6 show the main effects discussed so far in this study. First, when all the objects are located at the working plane, a distance zo from the objective lens, the lateral resolution of FINCH is better than that of the two-lens imager. This is evident in Figs. 5(g) and 6(a), where both RCs are at the working plane and all the images are in focus, but the small details are better revealed in the holographic reconstructions than in the image from the two-lens imager. This advantage in lateral resolution erodes as RC2 is moved toward the edges of the region, in the sense that the lateral resolution of RC2 in the right-hand column gradually decreases from Fig. 5(g) to Fig. 5(a) and from Fig. 6(a) to Fig. 6(g). Regarding the axial resolution, the superiority of the classical imager is reflected in Figs. 5 and 6 by the rapid disappearance of the out-of-focus image in the two-lens imager (left-hand column), in contrast to the relatively high intensity of the out-of-focus images seen in both the central and right-hand columns throughout the entire movement of RC2.

## 5. Conclusions

It is well known by now, based on [6,18,26–28], that some SIRH systems, such as FINCH, FISCH and the confocal FINCH, can resolve laterally better than conventional imaging systems having the same numerical aperture and illuminated by the same central wavelength, provided that specific constraints are fulfilled. From the present study, it is now better understood that this superiority in lateral resolution arises because FINCH, and similar systems, break a general fundamental law valid for many other classical imaging systems. Under certain circumstances, and in contrast to classical systems, FINCH can magnify the gap between spots more than it magnifies the spots themselves.

In order to find the resolution limits of a classical imaging system, it is enough to calculate, or to measure, the size of its PSF. This is because classical imaging systems obey the Lagrange invariant, which means that in classical systems the lateral gap between spots and their width are magnified equally, and the axial gap between spots is magnified by the same value as the lengths of the spots. Thus, knowing the size of the PSF gives the minimal gap needed between spots in order to resolve them. However, this is not the case for systems violating the Lagrange invariant, SIRH in general and FINCH in particular. For these systems, the magnifications of the spot size and of the gap between the spots should be measured, or calculated, separately, because these two magnifications are no longer necessarily the same, as they are in classical systems.

In this study, we have theoretically shown that FINCH can resolve laterally better than classical coherent systems by a factor that changes from 2 for objects at the working plane down to 1 at the edges of the working range. Relative to incoherent classical imagers, FINCH is better by a factor of 1.5 at the working plane, but becomes worse by a factor of about 1.35 (≈0.82/0.61) at the edges of the working region.

When the axial resolution is considered, the advantage of FINCH over the classical imagers vanishes completely. As the axial magnification ratio of FINCH changes from 0 at the working plane to 1 at the edges of the working region, it can be stated that classical imagers always resolve axially better than FINCH, although their superiority decreases as the object points move toward the end of the working range. How much worse is FINCH? Relative to coherent imagers, it can be estimated that the axial resolution limit of FINCH is smaller by a factor of $|\bar{M}_A/\bar{M}_\Delta|$. The value of this factor depends on the axial location of the object points and can be read from graphs similar to Fig. 4. Note that at zs = zo the value of $|\bar{M}_A/\bar{M}_\Delta|$ is zero, which fits well with the observation that the function zr(zs) has its extremum at zs = zo. This observation also relates to the phenomenon that any two points positioned symmetrically about the working plane, i.e., with the working plane exactly at their midpoint, are reconstructed at approximately the same location, as reflected in Eq. (35) and [29]. To overcome this inferiority of FINCH and to achieve a FINCH imager with superiority in both lateral and axial resolutions, albeit at the cost of a slow acquisition process, one can use confocal FINCH systems like those proposed in [18,30].

## Appendix A

In this appendix, we calculate the transverse magnification of a dual lens FINCH system (Fig. 2) for an object located in the working region of the system. For simplicity, it is assumed that the index of refraction is no = ni = 1 everywhere. Assume there is a single object point at $(x_s,0,-z_s)$ which is imaged by FINCH into two points, one located before the CCD at $(-x_sz_1/z_s,0,z_1)$ and the other located beyond the CCD at $(-x_sz_2/z_s,0,z_2)$. The relevant figure for the description in this appendix is Fig. 2, but in order to consider a more general situation, the perfect cone overlap assumption is not necessarily valid, and therefore the system parameters zo, f1 and f2 are replaced in the analysis by the arbitrary distances zs, z1 and z2, respectively. Each of the above-mentioned image points emits a spherical wave, and the two waves interfere on the CCD such that the obtained hologram is,

$H=\left|C_1L\left[\frac{z_1x_s}{z_s(z_h-z_1)}\right]Q\left[\frac{1}{z_h-z_1}\right]e^{i\theta}+C_2L\left[\frac{-z_2x_s}{z_s(z_2-z_h)}\right]Q\left[\frac{-1}{z_2-z_h}\right]\right|^2,$
where $Q(a)=\exp[i\pi a\lambda^{-1}(x^2+y^2)]$ is a quadratic phase function, $L(s)=\exp[i2\pi\lambda^{-1}sx]$ is a linear phase function, $C_1$ and $C_2$ are constants, and $\theta$ is the phase used for the phase-shifting procedure. Following the phase-shifting procedure, the obtained final hologram is,
$H_F=L\left[\frac{-x_sz_1}{z_s(z_h-z_1)}+\frac{-z_2x_s}{z_s(z_2-z_h)}\right]Q\left[\frac{-1}{z_h-z_1}+\frac{-1}{z_2-z_h}\right]=L\left[\frac{-x_s(z_2-z_1)z_h}{z_s(z_h-z_1)(z_2-z_h)}\right]Q\left[\frac{-(z_2-z_1)}{(z_h-z_1)(z_2-z_h)}\right]=L\left[\frac{-x_sz_h}{z_sz_r}\right]Q\left[\frac{-1}{z_r}\right].$
If the hologram of Eq. (25) is located at the origin of the coordinates and illuminated by a plane wave, then beyond the hologram the light is focused to an image point at $(x_sz_h/z_s,0,z_r)$. Therefore, based on Eq. (25), the transverse magnification of FINCH is
$|M_T|=\frac{z_h}{z_s},$
and the reconstruction distance of the image from the hologram is,
$z_r=\frac{(z_h-z_1)(z_2-z_h)}{z_2-z_1}.$
In the particular case where the object point is located at the working distance zo from the objective lens Lo, for which the cone overlap condition is satisfied, Eqs. (26) and (27) become,
$|M_T|=\frac{z_h}{z_o},$
and,
$z_r=\frac{(z_h-f_1)(f_2-z_h)}{f_2-f_1}.$
Substituting Eq. (8) into Eq. (29) yields the following expression for the reconstruction distance,

$z_r=\frac{f_2f_1(f_2-f_1)}{(f_2+f_1)^2}=\frac{z_h(f_2-f_1)}{2(f_2+f_1)}.$
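As a quick numerical consistency check (ours, not part of the paper), the general expression of Eq. (29) and the two forms of Eq. (30) can be verified to coincide once zh satisfies the overlap condition of Eq. (8):

```python
f1, f2 = 66.32, 140.0                             # cm, values from the experimental section
zh = 2 * f1 * f2 / (f1 + f2)                      # Eq. (8), perfect-overlap condition
zr_general = (zh - f1) * (f2 - zh) / (f2 - f1)    # Eq. (29)
zr_first = f2 * f1 * (f2 - f1) / (f2 + f1) ** 2   # Eq. (30), first form
zr_second = zh * (f2 - f1) / (2 * (f2 + f1))      # Eq. (30), second form
print(zr_general, zr_first, zr_second)            # all three agree
```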

## Appendix B

In this appendix, the axial magnification of the dual lens FINCH shown in Fig. 2 is calculated. Since the axial magnification is defined as dzr/dzs, one first needs to derive the expression of zr for object points shifted away from the working plane of the system. For an object point located a distance zs from the objective lens, the recorded hologram is

$H=\left|Q\left[\frac{1}{z_s}\right]Q\left[\frac{-1}{z_o}\right]\left(Q\left[\frac{-1}{f_1}\right]e^{i\theta}+Q\left[\frac{-1}{f_2}\right]\right)\ast Q\left[\frac{1}{z_h}\right]\right|^2=\left|Q\left[\frac{1}{z_a+z_h}\right]e^{i\theta}+Q\left[\frac{1}{z_b+z_h}\right]\right|^2,$
where
$z_a=\frac{f_1z_e}{f_1-z_e},\qquad z_b=\frac{f_2z_e}{f_2-z_e},\qquad z_e=\frac{z_oz_s}{z_o-z_s}=\frac{\alpha z_o}{1-\alpha}.$
Following a phase-shifting procedure, the obtained final hologram is
$H_F=Q\left[\frac{1}{z_a+z_h}-\frac{1}{z_b+z_h}\right].$
The reconstruction distance of the image from the hologram is,
$z_r=\frac{(z_a+z_h)(z_b+z_h)}{z_b-z_a}.$
Substituting Eqs. (32) and (8) into Eq. (34) yields,
$z_r=\frac{(z_h-f_1)(f_2-z_h)}{f_2-f_1}-\frac{z_h^2f_1f_2}{z_e^2(f_2-f_1)}=\frac{z_h(f_2-f_1)}{2(f_2+f_1)}-\frac{z_h^2f_1f_2(1-\alpha)^2}{\alpha^2z_o^2(f_2-f_1)}.$
The axial magnification of FINCH is given by

$\bar{M}_A=\frac{dz_r}{dz_s}=\frac{dz_r}{dz_e}\frac{dz_e}{dz_s}=\frac{2z_h^2f_1f_2}{(f_2-f_1)z_e^3}\frac{dz_e}{dz_s}=\frac{2z_h^2f_1f_2}{(f_2-f_1)z_e^3}\frac{d}{dz_s}\left(\frac{z_oz_s}{z_o-z_s}\right)=\frac{2z_h^2f_1f_2}{(f_2-f_1)z_o^2\alpha^2z_e}=\frac{z_h^3(f_1+f_2)}{z_o^3(f_2-f_1)}\frac{(1-\alpha)}{\alpha^3}.$
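The closed form of Eq. (36) can be checked (our sketch, not from the paper) against a central-difference derivative of zr(zs) taken from Eq. (35), using the parameter values of the experiments:

```python
def zr_of_zs(zs, zo, f1, f2):
    """Reconstruction distance, Eq. (35), with alpha = zs/zo."""
    zh = 2 * f1 * f2 / (f1 + f2)          # Eq. (8)
    a = zs / zo
    return (zh * (f2 - f1) / (2 * (f2 + f1))
            - zh**2 * f1 * f2 * (1 - a)**2 / (a**2 * zo**2 * (f2 - f1)))

def axial_mag(zs, zo, f1, f2):
    """Axial magnification, the signed closed form of Eq. (36)."""
    zh = 2 * f1 * f2 / (f1 + f2)
    a = zs / zo
    return zh**3 * (f1 + f2) * (1 - a) / (zo**3 * (f2 - f1) * a**3)

zo, f1, f2 = 30.0, 66.32, 140.0           # cm
zs, h = 33.0, 1e-6                        # test point and finite-difference step
numeric = (zr_of_zs(zs + h, zo, f1, f2) - zr_of_zs(zs - h, zo, f1, f2)) / (2 * h)
print(numeric, axial_mag(zs, zo, f1, f2))  # the two values agree
```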

## Appendix C

In this appendix, we calculate the length magnification of the spots reconstructed from a hologram, where the hologram is recorded by the dual lens FINCH shown in Fig. 2. The length of the object spot is proportional to $\lambda z_s^2/(n_oR^2)$, and the length of the image spot reconstructed along the z axis is proportional to $\lambda z_r^2/(n_iR_H^2)$. Therefore, the axial magnification of the spot length is

$\bar{M}_\Delta=\left|\frac{\Delta_i}{\Delta_o}\right|=\left|\frac{R^2z_r^2}{R_H^2z_s^2}\right|.$
The parameters zs, zr and RH in Eq. (37) are sensitive to the axial location of the object spot, and therefore each of them is calculated separately in the following.

First, the radius of the hologram RH is calculated. The hologram of an axially shifted object spot is no longer recorded by two perfectly overlapping spherical beams. If the object is farther than the working distance, the beam converging beyond the CCD dictates the hologram radius, while if the object is closer, the hologram radius is determined by the other beam, converging to a point before the CCD. We distinguish between the two cases using the index k = 1, 2.

Based on triangle similarity in Fig. 2, the radius of the hologram is

$R_H=\left|\frac{R(z_k-z_h)}{z_k}\right|,\qquad\begin{cases}k=1&\text{if }z_h\le2z_1z_2/(z_1+z_2)\\k=2&\text{if }z_h>2z_1z_2/(z_1+z_2),\end{cases}$
where zk is the distance of the image spot from the SLM, calculated by the imaging equations,
$z_k=\frac{z_oz_sf_k}{z_sf_k+z_oz_s-z_of_k},\qquad\begin{cases}k=1&\text{if }z_h\le2z_1z_2/(z_1+z_2)\\k=2&\text{if }z_h>2z_1z_2/(z_1+z_2).\end{cases}$
Substituting Eq. (39) into Eq. (38) and using the notation zs = αzo yields the following expression for the radius of the hologram
$R_H=\frac{R[\alpha z_of_k-z_h(\alpha z_o+(\alpha-1)f_k)]}{\alpha z_of_k},\qquad\begin{cases}k=1&\text{if }z_h\le2z_1z_2/(z_1+z_2)\\k=2&\text{if }z_h>2z_1z_2/(z_1+z_2).\end{cases}$
Substituting RH of Eq. (40), and zs = αzo into Eq. (37) yields
$\bar{M}_\Delta=\frac{R^2z_r^2}{R_H^2z_s^2}=\left(\frac{z_rf_k}{\alpha z_of_k-z_h[\alpha z_o+(\alpha-1)f_k]}\right)^2,\qquad\begin{cases}k=1&\text{if }z_h\le2z_1z_2/(z_1+z_2)\\k=2&\text{if }z_h>2z_1z_2/(z_1+z_2),\end{cases}$
where zr is already given in Eq. (35). Substituting Eq. (35) and Eq. (8) into Eq. (41) yields

$\bar{M}_\Delta=\left(\frac{z_h[\alpha z_o(f_2-f_1)+2f_2f_1|\alpha-1|]}{2\alpha^2z_o^2(f_2-f_1)}\right)^2.$
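To make the k-selection logic concrete, the sketch below (ours; note that the pupil radius R cancels in the ratio of Eq. (37)) computes $\bar{M}_\Delta$ both directly from Eqs. (35), (38) and (39) and from the closed form of Eq. (42), on both sides of the working plane:

```python
def m_delta_direct(zs, zo, f1, f2):
    """M_Delta via Eqs. (35), (38) and (39); R cancels in the ratio of Eq. (37)."""
    zh = 2 * f1 * f2 / (f1 + f2)                       # Eq. (8)
    a = zs / zo
    # Image distances of the two beams behind the SLM, Eq. (39)
    z1 = zo * zs * f1 / (zs * f1 + zo * zs - zo * f1)
    z2 = zo * zs * f2 / (zs * f2 + zo * zs - zo * f2)
    zk = z1 if zh <= 2 * z1 * z2 / (z1 + z2) else z2   # beam limiting the hologram radius
    rh_over_r = abs((zk - zh) / zk)                    # Eq. (38), R_H / R
    zr = (zh * (f2 - f1) / (2 * (f2 + f1))
          - zh**2 * f1 * f2 * (1 - a)**2 / (a**2 * zo**2 * (f2 - f1)))  # Eq. (35)
    return (zr / (rh_over_r * zs)) ** 2                # Eq. (37)

def m_delta_closed(zs, zo, f1, f2):
    """M_Delta from the closed form of Eq. (42)."""
    zh = 2 * f1 * f2 / (f1 + f2)
    a = zs / zo
    return (zh * (a * zo * (f2 - f1) + 2 * f2 * f1 * abs(a - 1))
            / (2 * a**2 * zo**2 * (f2 - f1))) ** 2

zo, f1, f2 = 30.0, 66.32, 140.0
for zs in (27.0, 33.0):                                # both sides of the working plane
    print(m_delta_direct(zs, zo, f1, f2), m_delta_closed(zs, zo, f1, f2))
```

The direct and closed-form values coincide on both branches of k, which exercises the case distinction of Eqs. (38)-(41).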

## Acknowledgments

This work was supported by The Israel Ministry of Science and Technology (MOST), by The Israel Science Foundation (ISF) (Grant No. 439/12) and by the National Institutes of Health (NIH), National Institute of General Medical Sciences Award Number U54GM105814.

## References

1. M. Born and E. Wolf, Principles of Optics (Pergamon, 1980), Chap. 4.4.5, p. 165; Chap. 8.6.2, p. 414; Chap. 8.8, p. 435.

2. M. Young, “Pinhole optics,” Appl. Opt. 10(12), 2763–2767 (1971).

3. J. Rosen and G. Brooker, “Digital spatially incoherent Fresnel holography,” Opt. Lett. 32(8), 912–914 (2007).

4. P. Bouchal, J. Kapitán, R. Chmelík, and Z. Bouchal, “Point spread function and two-point resolution in Fresnel incoherent correlation holography,” Opt. Express 19(16), 15603–15620 (2011).

5. X. Lai, S. Zeng, X. Lv, J. Yuan, and L. Fu, “Violation of the Lagrange invariant in an optical imaging system,” Opt. Lett. 38(11), 1896–1898 (2013).

6. R. Kelner, J. Rosen, and G. Brooker, “Enhanced resolution in Fourier incoherent single channel holography (FISCH) with reduced optical path difference,” Opt. Express 21(17), 20131–20144 (2013).

7. P. J. Peters, “Incoherent holograms with mercury light source,” Appl. Phys. Lett. 8(8), 209 (1966).

8. G. Cochran, “New method of making Fresnel transforms with incoherent light,” J. Opt. Soc. Am. 56(11), 1513–1517 (1966).

9. H. R. Worthington, Jr., “Production of holograms with incoherent illumination,” J. Opt. Soc. Am. 56(10), 1397–1398 (1966).

10. J. B. Breckinridge, “Two-dimensional white light coherence interferometer,” Appl. Opt. 13(12), 2760–2762 (1974).

11. G. Sirat and D. Psaltis, “Conoscopic holography,” Opt. Lett. 10(1), 4–6 (1985).

12. A. S. Marathay, “Noncoherent-object hologram: its reconstruction and optical processing,” J. Opt. Soc. Am. A 4(10), 1861–1868 (1987).

13. S.-G. Kim, B. Lee, and E.-S. Kim, “Removal of bias and the conjugate image in incoherent on-axis triangular holography and real-time reconstruction of the complex hologram,” Appl. Opt. 36(20), 4784–4791 (1997).

14. M. K. Kim, “Adaptive optics by incoherent digital holography,” Opt. Lett. 37(13), 2694–2696 (2012).

15. D. N. Naik, G. Pedrini, and W. Osten, “Recording of incoherent-object hologram as complex spatial coherence function using Sagnac radial shearing interferometer and a Pockels cell,” Opt. Express 21(4), 3990–3995 (2013).

16. Y. Wan, T. Man, and D. Wang, “Incoherent off-axis Fourier triangular color holography,” Opt. Express 22(7), 8565–8573 (2014).

17. W. Qin, X. Yang, Y. Li, X. Peng, H. Yao, X. Qu, and B. Z. Gao, “Two-step phase-shifting fluorescence incoherent holographic microscopy,” J. Biomed. Opt. 19(6), 060503 (2014).

18. R. Kelner, B. Katz, and J. Rosen, “Optical sectioning using a digital Fresnel incoherent-holography-based confocal imaging system,” Optica 1(2), 70–74 (2014).

19. G. Pedrini, H. Li, A. Faridian, and W. Osten, “Digital holography of self-luminous objects by using a Mach-Zehnder setup,” Opt. Lett. 37(4), 713–715 (2012).

20. B. W. Schilling, T. C. Poon, G. Indebetouw, B. Storrie, K. Shinoda, Y. Suzuki, and M. H. Wu, “Three-dimensional holographic fluorescence microscopy,” Opt. Lett. 22(19), 1506–1508 (1997).

21. N. T. Shaked, B. Katz, and J. Rosen, “Review of three-dimensional holographic imaging by multiple-viewpoint-projection based methods,” Appl. Opt. 48(34), H120–H136 (2009).

22. J. W. Goodman, Introduction to Fourier Optics (Roberts & Company, 2005), Chap. 6, p. 127; Chap. 5, p. 97; Chap. 9, p. 319.

23. F. A. Jenkins and H. E. White, Fundamentals of Optics (McGraw-Hill, 1985), Chap. 15, p. 333.

24. V. Micó, Z. Zalevsky, P. García-Martínez, and J. García, “Synthetic aperture superresolution with multiple off-axis holograms,” J. Opt. Soc. Am. A 23(12), 3162–3170 (2006).

25. G. Brooker, N. Siegel, V. Wang, and J. Rosen, “Optimal resolution in Fresnel incoherent correlation holographic fluorescence microscopy,” Opt. Express 19(6), 5047–5062 (2011).

26. J. Rosen, N. Siegel, and G. Brooker, “Theoretical and experimental demonstration of resolution beyond the Rayleigh limit by FINCH fluorescence microscopic imaging,” Opt. Express 19(27), 26249–26268 (2011).

27. B. Katz, J. Rosen, R. Kelner, and G. Brooker, “Enhanced resolution and throughput of Fresnel incoherent correlation holography (FINCH) using dual diffractive lenses on a spatial light modulator (SLM),” Opt. Express 20(8), 9109–9121 (2012).

28. O. Bouchal and Z. Bouchal, “Wide-field common-path incoherent correlation microscopy with a perfect overlapping of interfering beams,” J. Europ. Opt. Soc. Rap. Pub. 8, 13011 (2013).

29. N. Siegel, J. Rosen, and G. Brooker, “Reconstruction of objects above and below the objective focal plane with dimensional fidelity by FINCH fluorescence microscopy,” Opt. Express 20(18), 19822–19835 (2012).

30. N. Siegel and G. Brooker, “Improved axial resolution of FINCH fluorescence microscopy when combined with spinning disk confocal microscopy,” Opt. Express 22(19), 22298–22307 (2014).



