Abstract

Fresnel Incoherent Correlation Holography (FINCH) allows digital reconstruction of incoherently illuminated objects from intensity records acquired with the help of a Spatial Light Modulator (SLM). The article presents a wave optics model of FINCH, which allows analytical calculation of the Point Spread Function (PSF) for both the optical and digital parts of the imaging, and takes into account a Gaussian aperture for spatial bounding of the light waves. The 3D PSF is used to determine diffraction limits of the lateral and longitudinal size of a point image created in the FINCH set-up. Lateral and longitudinal resolution is investigated both theoretically and experimentally using quantitative measures introduced for two-point imaging. The dependence of the resolving power on the system parameters is studied, and an optimal geometry of the set-up is designed with regard to the best lateral and longitudinal resolution. Theoretical results are confirmed by experiments in which a light emitting diode (LED) is used as a spatially incoherent source to create object holograms using the SLM.

© 2011 OSA

1. Introduction

3D imaging has recently been the subject of increased interest for its numerous applications in molecular biomedical investigation, on-line industrial product inspection, micro-fabrication investigation, etc. In bioimaging, scanning confocal microscopy and wide-field imaging are applied as the main 3D imaging techniques [1]. Confocal microscopy provides excellent sectioning with high axial resolution achieved by rejecting out-of-focus light, but scanning slows down the imaging process. Wide-field microscopy gives real-time 2D imaging, but construction of a 3D image requires z-scanning and complicated data processing. 3D imaging of incoherently illuminated objects can also be realized by scanning holography [2], or by projection holography based on computed holograms synthesized from tens of images of the object captured from different view angles [3, 4].

Recently, Fresnel Incoherent Correlation Holography (FINCH) has been proposed for 3D imaging of incoherently illuminated objects [5]. This method combines principles of optical and digital holography with the spatial light modulation technique. Spatially incoherent light scattered by the observed object is collimated, transformed by an SLM, and subsequently captured by a CCD camera. The SLM splits the incident waves, and the set-up operates as a one-way interferometer producing holograms of separate object points. The holograms are then digitally reconstructed by applying the Fresnel transform [5]. FINCH has been tested with mercury arc lamp illumination [5] and with fluorescent objects [6]. The method was successfully used to design a fluorescence microscope [7] and a holographic synthetic aperture system [8, 9]. Although the method was demonstrated in a variety of applications, a complete mathematical description that allows detailed analysis of FINCH imaging has not yet been published. In [5, 6, 10], FINCH was described by a formal notation which did not allow analysis of the geometrical parameters of the system or accurate assessment of the properties of digital images. In the most recent article [11], interference records of a point object were described by the geometric parameters of the system and the lateral magnification of the digital image was determined. The lateral magnification was then used to discuss the influence of geometric parameters on the lateral resolution.

This article presents a wave optics model of FINCH which considers the real parameters of the experimental set-up and uses a variable Gaussian aperture for spatial bounding of the light waves. Mathematical operations that were only formally outlined in earlier articles are performed analytically in the proposed model. The result is the 3D PSF, valid for both the optical and digital parts of the FINCH imaging. We have verified that the relationship between geometrical parameters designed in [10, 11] for optimal lateral resolution is not optimal in terms of physical criteria applied to two-point imaging, and results in a reduction of the lateral resolution provided by the collimating optics. In addition, we determined for the first time the diffraction limit for the longitudinal resolution of FINCH imaging and proposed an optimal geometric configuration with regard to the best longitudinal resolution. Furthermore, we showed that the digitally reconstructed image is fully coherent even when the object is illuminated by spatially incoherent light. This important fact was not discussed in previous studies concerning optimal resolution in FINCH [5, 11]. The analysis of the geometrical parameters of FINCH imaging presented in [11] has been extended by calculating the longitudinal magnification of 3D objects. In the experimental part of the paper, we reconstructed the 3D PSF from experimental data and compared it with theoretical predictions. Transverse and longitudinal resolution was experimentally investigated for various system parameters, and the best values available in the optimal configuration were determined. An LED operated at the wavelength of 632 nm was used as a source of incoherent light in the experiments.

2. Computational model of FINCH

The basic principle of FINCH can be explained by Fig. 1, which shows both the optical and digital parts of the experiment. The observed object is placed near the focal plane of the collimating lens with the focal length f0 and illuminated by an incoherent source. Light scattered by the object is collimated, transformed by the SLM and captured by the CCD camera. The SLM acts as a diffractive beam splitter, so each wave impinging on it is doubled. The light from each point of the object is split into a signal and a reference wave that interfere and create a Fresnel zone structure on the CCD. The resulting hologram of the observed object is created as an incoherent superposition of the interference patterns of the individual points. In the experimental part of FINCH, three holograms of the object Ij, j = 1, 2, 3, are recorded with different phase shifts between the signal and reference waves. The recorded holograms are then processed and the observed object is digitally reconstructed using the Fresnel transform.

 

Fig. 1 Illustration of the basic principle of FINCH.


In optics, the PSF is used as a basic imaging function that allows one to examine the characteristics and quality of the created image. To study the properties of FINCH imaging, the PSF of the entire imaging chain, including both the optical and digital parts, must be determined. In previous articles, this task was formulated using a formal notation, but a precise mathematical description of the intensity distribution in the digital image is still not available.

In this paper, the PSF is calculated for an arbitrary object point P0 with the coordinates (x0, y0, z0), where z0 < 0 is used if the object lies in front of the collimating lens. Its paraxial image Pr is created by the collimating lens and has coordinates (xr, yr, zr) measured in the coordinate system with origin at the center of the CCD (see Fig. 1). The image is formed by paraxial rays, which represent normals of a paraboloidal wave converging to the image point Pr. This paraboloidal wave is incident on the SLM and splits into reference and signal waves with complex amplitudes Ur and Usj. The splitting of the incoming waves is ensured by the SLM transmission function of the form

t_j = a + b\exp[i(\theta - \vartheta_j)], \qquad j = 1, 2, 3,
where a and b are coefficients enabling an optimal energy distribution between the signal and reference waves, ϑj denotes a constant phase shift, θ = k(x² + y²)/(2fd) is the quadratic phase of a lens with the focal length fd, and k denotes the wave number. The complex amplitude of the light field behind the SLM can be written as
U_j = U_r + U_{sj} = a_r\exp(i\Phi_r) + a_s\exp[i(\Phi_s - \vartheta_j)],
where ar and as are the amplitudes, and Φr and Φs the phases, of the reference and signal waves converging to the points Pr and Ps, respectively. Using the paraxial approximation, the phases Φr and Φs can be expressed as
\Phi_j = k z_j\left[1 + \frac{(x_c - x_j)^2 + (y_c - y_j)^2}{2 z_j^2}\right], \qquad j = r, s,
where (xc, yc) are coordinates in the CCD camera plane, zr = z0′ − Δ1 − Δ2 and zs = zm − Δ2. Δ1 and Δ2 denote the separation distances between the collimating lens, the SLM and the CCD, and z0, z0′ and zm′, zm are the object and image distances for the collimating lens and the SLM lens, respectively (see Fig. 1). The reference and signal waves originate from the same object point P0 and are spatially coherent. If the difference of their optical paths is less than the coherence length of the source, the waves interfere at the CCD and create an interference pattern with the shape of the Fresnel zone plate,
I_j = a_r^2 + a_s^2 + 2 a_r a_s\cos(\Phi_s - \Phi_r - \vartheta_j), \qquad j = 1, 2, 3.
Holograms of the object are prepared for three different phase settings ϑ1 = 0, ϑ2 = 2π/3 and ϑ3 = 4π/3. The intensity records Ij are then processed to remove the holographic twin image. In this way, a complex function T is created, which is used for the digital image reconstruction [5],
T = I_1\left[\exp(i\vartheta_3) - \exp(i\vartheta_2)\right] + I_2\left[\exp(i\vartheta_1) - \exp(i\vartheta_3)\right] + I_3\left[\exp(i\vartheta_2) - \exp(i\vartheta_1)\right].
By using Eq. (4), T can be simplified to the form
T = A\exp[i(\Phi_s - \Phi_r)],
where A = −i3√3 a_r a_s. If the phase difference Φs − Φr is rearranged using Eq. (3), we can write
T = A\exp(i\Omega_0)\exp\left\{\frac{ik}{2 f_l}\left[(x_c - X')^2 + (y_c - Y')^2\right]\right\}.
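The twin-image removal performed by the three-step combination in Eq. (5) can be checked numerically: the constant and conjugate terms cancel and a single complex exponential with the amplitude 3√3 a_r a_s survives. A minimal sketch with illustrative amplitudes and phase (not experimental values):

```python
import numpy as np

# illustrative amplitudes and phase difference (not from the experiment)
a_r, a_s = 0.8, 0.5                              # reference and signal amplitudes
dphi = 1.234                                     # phase difference Phi_s - Phi_r
theta = np.array([0.0, 2*np.pi/3, 4*np.pi/3])    # phase shifts theta_1..theta_3

# three phase-shifted intensity records, Eq. (4)
I = a_r**2 + a_s**2 + 2*a_r*a_s*np.cos(dphi - theta)

# three-step combination, Eq. (5)
e = np.exp(1j*theta)
T = I[0]*(e[2] - e[1]) + I[1]*(e[0] - e[2]) + I[2]*(e[1] - e[0])

# the zero-order and twin-image terms cancel; only A*exp(i*dphi) survives
A = T/np.exp(1j*dphi)
```

Evaluating the block for any dphi gives the same complex constant A with |A| = 3√3 a_r a_s, confirming that the constant and conjugate terms drop out of the combination.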
T represents the quadratic phase of a lens; in accordance with [11], it is termed the diffractive lens in this article. The focal length of the diffractive lens is given by 1/fl = 1/zs − 1/zr, and its axis is shifted laterally relative to the origin of the coordinate system of the object space. The focal length fl and the shift of the axis (X′, Y′) are uniquely determined by the position of the object point and the system parameters. Applying the lens equation, we can write
f_l = (1 - F)\left[(1 - F) f_d - \Delta_2\right],
where
F = \frac{\Delta_2 (z_0 + f_0)}{z_0 f_0 - \Delta_1 (z_0 + f_0)},
X' = x_0 m, \qquad Y' = y_0 m,
where
m = \frac{f_0 \Delta_2}{z_0 (f_0 - \Delta_1) - f_0 \Delta_1}.
As will be demonstrated later, the digital image is produced coherently. The phase function Ω0 is then used to determine the conditions of interference in the reconstructed image. Its dependence on the object position can be expressed as
\Omega_0 = \frac{k \Delta_2}{2 F (\Delta_2 + f_d F)}\left[\frac{f_0^2 F^2}{(z_0 + f_0)^2}\left(x_0^2 + y_0^2\right)\right].
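The closed-form parameters F, fl and m can be cross-checked against a direct paraxial cascade (collimating lens, free space, SLM lens, free space, with distances referred to the CCD); a small sketch with illustrative distances in millimetres:

```python
def F_param(z0, f0, d1, d2):
    """Parameter F of Eq. (9)."""
    return d2*(z0 + f0)/(z0*f0 - d1*(z0 + f0))

def f_lens(z0, f0, d1, d2, fd):
    """Focal length of the diffractive lens, Eq. (8)."""
    F = F_param(z0, f0, d1, d2)
    return (1 - F)*((1 - F)*fd - d2)

def magnification(z0, f0, d1, d2):
    """Lateral magnification m, Eq. (11)."""
    return f0*d2/(z0*(f0 - d1) - f0*d1)

def f_lens_raytrace(z0, f0, d1, d2, fd):
    """1/fl = 1/zs - 1/zr assembled from the paraxial imaging chain."""
    z0p = z0*f0/(z0 + f0)    # image distance of the collimating lens
    zr = z0p - d1 - d2       # reference image point referred to the CCD
    p = z0p - d1             # object distance for the SLM lens
    zm = fd*p/(fd + p)       # image distance of the SLM lens
    zs = zm - d2             # signal image point referred to the CCD
    return 1.0/(1.0/zs - 1.0/zr)

# illustrative geometry (mm), close to the experimental parameters
f0, d1, d2, fd, z0 = 200.0, 250.0, 600.0, 750.0, -190.0
fl_formula = f_lens(z0, f0, d1, d2, fd)
fl_direct = f_lens_raytrace(z0, f0, d1, d2, fd)
```

The two routes agree for any geometry with finite intermediate distances; the same functions also reproduce the object-space telecentric case Δ1 = f0, for which m = −Δ2/f0 independently of z0.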

In FINCH, 3D objects are reconstructed using the complex function T, which is created from the intensity records Ij according to Eq. (5). The complex amplitude of the image U′ is then obtained as a convolution of T and the impulse response function of free propagation of light, h. When calculating the PSF, the quadratic phase of the diffractive lens Eq. (7) is used in the convolution. In this case we can write

U'(\mathbf{r}') = \iint Q(x_c, y_c)\, T(x_c, y_c)\, h(x' - x_c,\, y' - y_c,\, z')\, \mathrm{d}x_c\, \mathrm{d}y_c,
where r′ = (x′, y′, z′) is the position vector in the image space, defined in the coordinate system with origin at the center of the CCD camera, Q denotes the aperture function, which simulates the transverse confinement of the waves interfering at the CCD, and h is the free space impulse response function. Substituting Eq. (7) into Eq. (13) and applying the Fresnel approximation of h [12], we obtain
U'(\mathbf{r}') = \frac{iA}{\lambda z'}\exp(i\Gamma)\iint Q(x_c, y_c)\exp\left[\frac{ik\left(x_c^2 + y_c^2\right)}{2}\left(\frac{1}{f_l} - \frac{1}{z'}\right)\right]\exp\left\{i 2\pi\left[x_c \bar{X} + y_c \bar{Y}\right]\right\}\mathrm{d}x_c\, \mathrm{d}y_c,
where
\Gamma = \Omega_0 - k z' + \frac{k\left(X'^2 + Y'^2\right)}{2 f_l} - \frac{k\left(x'^2 + y'^2\right)}{2 z'},
\bar{X} = \frac{1}{\lambda}\left(\frac{x'}{z'} - \frac{X'}{f_l}\right), \qquad \bar{Y} = \frac{1}{\lambda}\left(\frac{y'}{z'} - \frac{Y'}{f_l}\right),
and λ denotes the wavelength. The intensity distribution I′ = |U′|² defines the diffraction limited PSF, including both the optical and digital parts of the FINCH imaging.

3. Geometrical optics approach to FINCH imaging

The PSF can be calculated analytically under simplified conditions which allow performing the Fourier transform Eq. (14). The task is easiest in the approximation of ray optics, which does not consider the spatial bounding of light. In this case, the aperture function becomes constant, Q = 1. At the distance z′ = fl, the quadratic phase term in Eq. (14) is eliminated. This situation corresponds to a sharp image determined by the Dirac delta function,

U'(x', y', f_l) = \frac{iA}{\lambda f_l}\exp(i\Gamma)\,\delta(\bar{X}, \bar{Y}).
Equation (17) shows that the object point P0(x0, y0, z0) is digitally reconstructed as an image point P′ whose position is given by the coordinates x′ = X′, y′ = Y′, and z′ = fl. The symbol m used in Eq. (10) then represents the lateral magnification defined as the ratio between the image size and the true size in the object space, m = x′/x0 = y′/y0. Ray optics provides a simple interpretation of FINCH imaging in which the longitudinal position of the object point z0 determines the focal length of the diffractive lens, and the lateral position (x0, y0) causes a displacement of the lens axis. The paraxial image always lies on the axis of the lens, so the system exhibits image space telecentricity. This feature of the diffractive lens was used in [11] to determine the lateral magnification, but the link between the image position and the transverse shift of the lens axis was not clarified. The lateral magnification Eq. (11) agrees with that determined in [11] and demonstrates its dependence on the set-up parameters and the object distance z0. It is interesting to note that the lateral magnification is independent of the focal length of the lens implemented on the SLM. The derivative dm/dz0 shows that the dependence of m on z0 can be eliminated by an appropriate choice of Δ1,
\frac{dm}{dz_0} = \frac{(\Delta_1 - f_0)\, m^2}{f_0 \Delta_2}.
When the SLM is positioned at the back focal plane of the collimating lens (Δ1 = f0), m becomes independent of the object distance z0. In this special case, the lateral magnification is given as m = −Δ2/Δ1 and the FINCH imaging shows object-space telecentricity.

The longitudinal magnification of the FINCH imaging is defined as α = dfl/dz0. Using Eq. (8) it can be written as

\alpha = \frac{f_0^2\left[2 f_d (1 - F) - \Delta_2\right] F^2}{\Delta_2 (z_0 + f_0)^2},
where F is given by Eq. (9). The derived relations for fl, m and α are valid for a general position of the observed object and can be used in the analysis of two special cases.

Special case I: Planar object located in focal plane of collimating lens

The relations for the geometric parameters of the imaging, Eqs. (8), (11) and (19), are simplified when a planar object is placed in the focal plane of the collimating lens, z0 → −f0. In this case, we can write

f_{lF} \equiv \lim_{z_0 \to -f_0}\{f_l\} = f_d - \Delta_2,
m_F \equiv \lim_{z_0 \to -f_0}\{m\} = -\frac{\Delta_2}{f_0},
\alpha_F \equiv \lim_{z_0 \to -f_0}\{\alpha\} = \frac{\Delta_2 - 2 f_d}{f_0}\, m_F.

Special case II: Lensless FINCH imaging

FINCH imaging can also be implemented in a simplified system in which the object is recorded without the collimating lens [10]. In this case, Eqs. (8), (11) and (19) are used in the limit f0 → ∞ and with Δ1 = 0,

f_{lL} \equiv \lim_{f_0 \to \infty}\{f_l\} = (1 - m_L)\left[(1 - m_L) f_d - \Delta_2\right],
m_L \equiv \lim_{f_0 \to \infty}\{m\} = \frac{\Delta_2}{z_0},
\alpha_L \equiv \lim_{f_0 \to \infty}\{\alpha\} = \frac{\left[2 f_d (1 - m_L) - \Delta_2\right] m_L^2}{\Delta_2}.

4. PSF for diffraction limited FINCH imaging

In order to examine diffraction limited FINCH imaging, the lateral restriction of light in the system must be considered. The proposed model takes into account the fixed apertures of the optical components and also considers the variable size of the holograms recorded on the CCD. The holograms are laterally confined by a Gaussian function, which allows the PSF to be calculated analytically. The aperture function used in Eq. (13) then takes the form

Q = \exp\left(-\frac{x_c^2 + y_c^2}{\Delta\rho_c^2}\right),
where Δρc is the radius of the holograms of a point object. It depends on the experiment geometry and can be determined by a ray tracing method. In some cases, the size of the hologram is limited by the detection conditions and must be determined by means of the Nyquist criterion. The size of the recorded hologram used in Eq. (26) is then defined as
\Delta\rho_c = \min\left\{\rho_{CCD},\ \Delta\rho_r,\ \Delta\rho_N\right\},
where 2ρCCD is the transverse size of the CCD camera, and Δρr and ΔρN denote the hologram radius obtained by applying the ray tracing method and the Nyquist criterion, respectively. The calculation of Δρc will be given later for the actual configuration of the experiment.

When using the Gaussian aperture, a point object can be reconstructed by analytical calculation. The complex amplitude of the image is obtained by Eq. (14) and can be written as

U'(\mathbf{r}') = A\exp(i\Omega_0)\, u_0(\mathbf{r}'),
where
u_0(\mathbf{r}') = \frac{i\pi}{\lambda z' q}\exp\left[-m^2\left(x_0^2 + y_0^2\right)\gamma - \frac{i\pi\left(x'^2 + y'^2\right)}{\lambda z'}\right] \times \exp\left\{-\frac{\pi^2}{q}\left[\left(\frac{x'}{\lambda z'} - \frac{i m x_0 \gamma}{\pi}\right)^2 + \left(\frac{y'}{\lambda z'} - \frac{i m y_0 \gamma}{\pi}\right)^2\right]\right\},
\gamma = \frac{1}{\Delta\rho_c^2} - \frac{i\pi}{\lambda z'},
q = \frac{1}{\Delta\rho_c^2} + \frac{i\pi}{\lambda}\left(\frac{1}{z'} - \frac{1}{f_l}\right).
The PSF of FINCH imaging can then be written as the 3D intensity distribution normalized by the intensity at the paraxial image point,
I_N(\mathbf{r}') = \frac{|U'(\mathbf{r}')|^2}{|U'(m x_0,\, m y_0,\, f_l)|^2}.
The PSF can be readily discussed for an axial object point with the coordinates x0 = y0 = 0. The transverse intensity profile of a perfectly focused digital image in the plane z′ = fl is given by Eq. (32) used with Eq. (28), and can be written in the form
I_N(\mathbf{r}') = \exp\left(-\frac{2\pi^2 NA'^2 r'^2}{\lambda^2}\right),
where r′ = (x′² + y′²)^{1/2} and NA′ may be understood as the numerical aperture of the diffractive lens, NA′ = Δρc/fl.

The radius of the Gaussian diffraction spot Δr′ can be defined by the fall of the normalized intensity to the value IN(Δr′) = 1/e²,

\Delta r' = \frac{\lambda}{\pi NA'}.
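The Gaussian transverse profile of Eq. (33) and the spot radius of Eq. (34) can be reproduced by direct numerical evaluation of the apodized diffraction integral for an axial point in the sharp-image plane, where only the Gaussian aperture remains. A sketch with illustrative parameters (the SLM half-aperture ρSLM = 8 mm is an assumption):

```python
import numpy as np

lam = 632.8e-9            # wavelength (m)
fd, d2 = 0.75, 0.60       # SLM focal length and SLM-CCD distance (m)
rho_slm = 8e-3            # assumed SLM half-aperture (m)

fl = fd - d2                    # focal-plane object: f_lF = fd - d2
drho = rho_slm*(fd - d2)/fd     # Gaussian aperture radius, Eq. (38)
NA = drho/fl                    # numerical aperture of the diffractive lens

# 1D cut of the diffraction integral in the plane z' = fl,
# where the quadratic phase term vanishes
xc = np.linspace(-6*drho, 6*drho, 4001)
dx = xc[1] - xc[0]

def U(xp):
    """Apodized Fourier integral evaluated at the image coordinate xp."""
    integrand = np.exp(-xc**2/drho**2)*np.exp(-2j*np.pi*xc*xp/(lam*fl))
    return integrand.sum()*dx

dr = lam/(np.pi*NA)                      # predicted spot radius, Eq. (34)
ratio = abs(U(dr))**2/abs(U(0.0))**2     # should fall to 1/e^2
```

The intensity falls to 1/e² of its axial value exactly at r′ = λ/(πNA′), reproducing Eq. (34).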
For analysis of the longitudinal resolution, the depth of field of the reconstructed image must be known. Image defocusing can be evaluated using the Strehl ratio D defined as
D = \frac{|U'(0, 0, f_l + \Delta z')|^2}{|U'(0, 0, f_l)|^2}.
It describes the change of the axial intensity in dependence on the focus error Δz′. By using Eq. (28), the Strehl ratio can be arranged into the form
D = \left[\left(1 + \frac{\Delta z'}{f_l}\right)^2 + \frac{\Delta z'^2 NA'^2}{\Delta r'^2}\right]^{-1}.
The depth of focus Δz′ is then defined by a tolerated decrease of the Strehl ratio D. Assuming fl >> Δr′, we can write
\Delta z' = \pm\frac{\lambda}{\pi NA'^2}\sqrt{\frac{1}{D} - 1}.
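The depth-of-focus formula is an approximate inversion of the exact Strehl expression, valid for fl >> Δr′; its accuracy is easy to check numerically (illustrative numbers):

```python
import numpy as np

lam = 632.8e-9      # wavelength (m)
NA = 0.01           # image-side numerical aperture (illustrative)
fl = 0.15           # focal length of the diffractive lens (m)

dr = lam/(np.pi*NA)              # spot radius, Eq. (34)
D_target = np.exp(-2.0)          # tolerated Strehl ratio

# defocus predicted by the approximate inversion, Eq. (37)
dz = lam/(np.pi*NA**2)*np.sqrt(1.0/D_target - 1.0)

# exact Strehl ratio, Eq. (36), evaluated at that defocus
D_exact = 1.0/((1.0 + dz/fl)**2 + dz**2*NA**2/dr**2)
```

The residual discrepancy of about one percent comes from the neglected (1 + Δz′/fl)² term, confirming that the approximation is well justified for these parameters.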
Equations (34) and (37) are of general validity, but the numerical aperture NA′ must be analyzed separately for the system with a collimating lens and the lensless experiment.

FINCH with collimating lens

If we assume an object in the focal plane of the collimating lens, fl is given by Eq. (21) and the radius of the intensity records Δρc can be simply estimated using ray tracing and the Nyquist criterion. By means of ray optics, the radius Δρr is determined by the simple relation

\Delta\rho_r = \frac{\rho_{SLM}}{f_d}\left(f_d - \Delta_2\right),
where 2ρSLM denotes the transverse size of the SLM. When the spatial period of the recorded interference patterns is required to fulfill the Nyquist criterion, the radius ΔρN is approximately given by
\Delta\rho_N = \frac{\left(f_d - \Delta_2\right)\lambda}{2\Delta x_{CCD}},
where ΔxCCD is the pixel size of the CCD camera. For real experimental parameters and under the assumption Δ2 < 2fd, Δρc = Δρr can be used.
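For parameters close to the experiment, the ray-traced radius is indeed the smallest of the three candidates in Eq. (27); a sketch with half-sizes taken from the component dimensions quoted later and an assumed CCD pixel size:

```python
rho_slm = 8e-3        # SLM half-aperture (m), from the 16 mm SLM width
rho_ccd = 7.5e-3      # CCD half-aperture (m), from the 15 mm sensor
dx_ccd = 7.4e-6       # assumed CCD pixel size (m)
lam = 632.8e-9        # wavelength (m)
fd, d2 = 0.75, 0.60   # SLM focal length and SLM-CCD distance (m)

drho_r = rho_slm*(fd - d2)/fd          # ray-traced radius, Eq. (38)
drho_N = (fd - d2)*lam/(2*dx_ccd)      # Nyquist-limited radius, Eq. (39)
drho_c = min(rho_ccd, drho_r, drho_N)  # Eq. (27)
```

With these numbers Δρr = 1.6 mm, well below both the Nyquist radius and the CCD half-size, so Δρc = Δρr.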

When using Δρc given by Eq. (38), the aperture of the diffractive lens is equal to the aperture of the modulator lens, NA′ = ρSLM/fd. For the Strehl ratio D = 1/e², the transverse and longitudinal sizes of the diffraction spot can be written as

2\Delta r' = \frac{2\lambda}{\pi NA'}, \qquad 2\Delta z' = \frac{5\Delta r'}{NA'}.
It is interesting to note that the size of the diffraction spot remains the same when the separation distance Δ2 is changed.

The intensity profile in the plane (x′, z′) obtained using Eq. (28) is illustrated in Fig. 2(b) for the parameters f0 = 200 mm, fd = 750 mm, Δ1 = 250 mm and Δ2 = 600 mm. For comparison, the PSF reconstructed from the CCD records is shown in Fig. 2(a). The parameters of the experiment were the same as those used in the calculation. As can be seen in Figs. 2(a) and 2(b), the size and shape of the demonstrated PSFs are in very good agreement.

 

Fig. 2 Comparison of the 3D Point Spread Function of FINCH imaging obtained by (a) experiment and (b) simulation model for the parameters f0 = 200 mm, fd = 750 mm, Δ1 = 250 mm, and Δ2 = 600 mm.


Lensless FINCH

When the lensless configuration is used with |z0| < fd, the radius of the recorded holograms is given as Δρc = ρCCD. The numerical aperture of the diffractive lens can then be written as NA′ = ρCCD/flL, where flL is given by Eq. (23). In this case, the transverse and longitudinal sizes of the diffraction spot Eq. (40) depend on the separation distance between the SLM and the CCD.

5. Optimal geometric configuration of FINCH

In conventional imaging systems, the Lagrange invariant can be used to express the lateral magnification as a ratio of the object and image numerical apertures, m = NA/NA′. If the Rayleigh criterion is applied to a diffraction limited image, two points are resolved when their mutual distance Δr′ ≈ λ/NA′ corresponds to the radius of the image spot of a point object. The related distance in the object space can be written as Δr = Δr′/m ≈ λ/NA, so that the resolution is also diffraction limited. In FINCH, which includes both optical and digital imaging, the Lagrange invariant is not inherently fulfilled. The resolution enabled by the object aperture NA is not fully exploited in the reconstructed image if the parameters of the set-up are inappropriately chosen. In the presented analysis, the optimal geometry of the experiment is discussed for two different configurations of the FINCH set-up.

FINCH imaging with collimating optics

To simplify the discussion, an object placed near the front focal plane of the collimating lens is considered. In this case, Eqs. (21) and (22) can be used for the lateral magnification mF and the focal length flF of the diffractive lens. Under the assumption that the transverse dimensions of the collimating lens and the SLM are the same, the object and image numerical apertures can be written as NA = ρSLM/f0 and NA′ = Δρc/flF. Using Eq. (38), the ratio of the numerical apertures can be written as NA/NA′ = |mF|fd/Δ2. The lateral resolution in the object space is given as Δr = Δr′/|mF|. Using the image resolution Eq. (40), Δr can be expressed as

\Delta r = \max\left\{\Delta r_0,\ \frac{f_d}{\Delta_2}\Delta r_0\right\},
where Δr0 = λf0/(πρSLM) is the diffraction limited object resolution of the collimating lens used. The dependence of Δr on the ratio Δ2/fd is shown in Fig. 3 for different focal lengths of the collimating lenses. If Δ2 ≥ fd, the lateral resolution of the FINCH imaging is limited by the resolution of the collimating optics. In a FINCH set-up with Δ2 < fd, the lateral resolution of the collimating optics is reduced. In [11], an optimal FINCH configuration was analyzed using Eq. (15), which takes into account the size of the reconstructed image expressed by the parameter a. If its value is chosen owing to the technical conditions as a = 1, the separation distance Δ2 = fd/2 is obtained [10, 11]. In this case, the lateral resolution of the two-point object is two times worse than the diffraction limited resolution of the collimating lens Δr0. When Eq. (15) in [11] is used with a = 0, the CCD position ensuring the best two-point resolution is obtained.

 

Fig. 3 Dependence of the transverse resolution Δr and the longitudinal resolution Δz on the ratio Δ2/fd for the FINCH set-up using the collimating lens (fd = 750 mm, Δ1 = 250 mm; f0 = 100 mm - - -, f0 = 150 mm —, f0 = 200 mm -.-.-).


The longitudinal resolution in the object space is given as Δz = Δz′/|αF|, where αF denotes the longitudinal magnification. Using Eq. (22), Δz can be rewritten as

\Delta z = \frac{f_d^2}{\Delta_2\left|\Delta_2 - 2 f_d\right|}\Delta z_0,
where Δz0 = λf0²/(πρSLM²) is the longitudinal resolution of the collimating lens. For Δ2 = fd, the longitudinal resolution is limited by the collimating optics, Δz = Δz0. In other cases, the resolution is reduced by the FINCH set-up and Δz > Δz0. The dependence of Δz on the ratio Δ2/fd is shown in Fig. 3. Technical difficulties related to the Δ2 = fd setting will be discussed in the experimental section.
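Both object-space limits can be tabulated directly from Eqs. (41) and (42), reproducing the qualitative behavior of Fig. 3; a sketch with an assumed SLM half-aperture:

```python
import numpy as np

lam = 632.8e-9        # wavelength (m)
f0 = 0.20             # collimating lens focal length (m)
fd = 0.75             # SLM lens focal length (m)
rho_slm = 8e-3        # assumed SLM half-aperture (m)

dr0 = lam*f0/(np.pi*rho_slm)          # lateral diffraction limit of the collimator
dz0 = lam*f0**2/(np.pi*rho_slm**2)    # longitudinal diffraction limit

def dr(d2):
    """Lateral two-point resolution in the object space, Eq. (41)."""
    return max(dr0, fd/d2*dr0)

def dz(d2):
    """Longitudinal two-point resolution in the object space, Eq. (42)."""
    return fd**2/(d2*abs(d2 - 2*fd))*dz0
```

At Δ2 = fd both limits coincide with the diffraction limits of the collimating lens, while Δ2 = fd/2 doubles Δr, in line with the discussion above.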

Lensless FINCH imaging

In lensless FINCH, each point of the object emits a spherical wave which is captured by the SLM and subsequently divided into reference and signal waves interfering at the CCD. The lateral resolutions in the object and image space are related as Δr = Δr′/|mL|, where Δr′ can be obtained by Eq. (40) and mL is given by Eq. (25). Assuming fd > 0 and |z0| < fd, the numerical aperture of the diffractive lens can be written as NA′ = ρCCD/flL, where flL is given by Eq. (23). The lateral resolution Δr can then be expressed in the form

\Delta r = \frac{\lambda\left|z_0\right| f_{lL}}{\pi \rho_{CCD} \Delta_2}.
It depends on all geometrical parameters of the experiment, because the dependence on fd is hidden in flL. For practical purposes it is important to optimize the distance Δ2 for the given parameters z0 and fd. Using the derivative ∂Δr/∂Δ2, it can be shown that the best resolution is obtained for
\Delta_2 = -z_0\left(1 + \frac{z_0}{f_d}\right)^{-1/2}.
With the optimal distance given by Eq. (44), the best lateral resolution can be written as
\Delta r = \frac{\lambda}{\pi NA'},
where NA′ = ρCCD/[2(2fd + z0)]. The dependence of Δr on the ratio Δ2/fd is shown in Fig. 4 for fd = 750 mm and the object distances z0 = −50 mm, −75 mm and −100 mm. For |z0| << fd, the optimal distance between the SLM and the CCD is approximately given as Δ2 ≈ −z0, and the best resolution in these planes remains unchanged.
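The optimal SLM-CCD distance of Eq. (44) can be verified by brute-force minimization of Eq. (43); a sketch with illustrative lensless parameters (ρCCD assumed from the 15 mm sensor):

```python
import numpy as np

lam = 632.8e-9        # wavelength (m)
fd = 0.75             # SLM lens focal length (m)
z0 = -0.10            # object distance (m), z0 < 0
rho_ccd = 7.5e-3      # assumed CCD half-aperture (m)

def f_lL(d2):
    """Lensless diffractive-lens focal length, Eq. (23)."""
    mL = d2/z0
    return (1 - mL)*((1 - mL)*fd - d2)

def dr_lensless(d2):
    """Lateral resolution in the object space, Eq. (43)."""
    return lam*abs(z0)*f_lL(d2)/(np.pi*rho_ccd*d2)

# brute-force scan of the SLM-CCD distance
d2 = np.linspace(0.05, 0.30, 100001)
d2_num = d2[np.argmin(dr_lensless(d2))]

# closed-form optimum, Eq. (44)
d2_opt = -z0*(1 + z0/fd)**-0.5
```

The scanned minimum coincides with the closed-form optimum, and the resulting best resolution agrees with λ/(πNA′) for NA′ = ρCCD/[2(2fd + z0)] to within about one percent.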

 

Fig. 4 Dependence of the transverse resolution Δr and the longitudinal resolution Δz on the ratio Δ2/fd for lensless FINCH (fd = 750 mm; z0 = −50 mm - - -, z0 = −75 mm —, z0 = −100 mm -.-.-).


The longitudinal resolution in the object space is defined as Δz = Δz′/αL. For the chosen parameters of the experiment, the optimal position of the CCD camera Δ2 can be determined from the condition

2\alpha_L\frac{\partial f_{lL}}{\partial \Delta_2} - f_{lL}\frac{\partial \alpha_L}{\partial \Delta_2} = 0.
Unlike the case of the lateral resolution, the best longitudinal resolution depends on the location of the object z0. The dependence of Δz on the ratio Δ2/fd is shown in Fig. 4.
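The stationarity condition of Eq. (46) follows from minimizing Δz over Δ2: since Δz′ scales with 1/NA′² and hence with flL², the quantity to minimize is proportional to flL²/αL. A sketch checking the condition with finite differences (illustrative parameters):

```python
import numpy as np

fd, z0 = 0.75, -0.10      # illustrative lensless parameters (m)

def f_lL(d2):
    """Lensless diffractive-lens focal length, Eq. (23)."""
    mL = d2/z0
    return (1 - mL)*((1 - mL)*fd - d2)

def alpha_L(d2):
    """Lensless longitudinal magnification, Eq. (25)."""
    mL = d2/z0
    return (2*fd*(1 - mL) - d2)*mL**2/d2

# Delta z is proportional to f_lL^2/alpha_L; locate its minimum by scanning
d2 = np.linspace(0.02, 0.50, 480001)
obj = f_lL(d2)**2/alpha_L(d2)
d2_opt = d2[np.argmin(obj)]

# finite-difference check of 2*alpha_L*df_lL/dD2 - f_lL*dalpha_L/dD2 = 0
h = 1e-6
dfl = (f_lL(d2_opt + h) - f_lL(d2_opt - h))/(2*h)
dal = (alpha_L(d2_opt + h) - alpha_L(d2_opt - h))/(2*h)
resid = 2*alpha_L(d2_opt)*dfl - f_lL(d2_opt)*dal
```

At the scanned minimum the residual of the stationarity condition vanishes to within the grid resolution.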

6. Coherent digital reconstruction of two-point object

The analysis of the resolving power of FINCH is performed for imaging of two point sources Pj(xj, yj, zj), j = 1, 2, emitting uncorrelated light waves of equal amplitude. If the intensity records Eq. (4) are performed for the two-point object, the complex function T given by Eq. (5) can be written as

T = A\sum_{j=1}^{2}\exp(i\Omega_j)\exp\left\{\frac{ik}{2 f_{lj}}\left[(x_c - X_j')^2 + (y_c - Y_j')^2\right]\right\}.
It represents a superposition of two diffractive lenses whose focal lengths flj and lateral shifts of axes (Xj′, Yj′) depend on the coordinates of the object points (xj, yj, zj), j = 1, 2, and on the parameters of the set-up. The digital image of the two-point object is obtained by the convolution Eq. (13) applied to T given by Eq. (47). Its complex amplitude UTP can be written as
U_{TP}(\mathbf{r}') = A\sum_{j=1}^{2}\exp(i\Omega_j)\, u_j(\mathbf{r}').
The symbols Ωj and uj are given by Eqs. (12) and (30), where the coordinates (x0, y0, z0) are replaced by (xj, yj, zj), j = 1, 2. Though the observed point sources emit incoherent light, the digital image is linear in complex amplitude and the total intensity ITP = |UTP|² is given by the interference law,
I_{TP}(\mathbf{r}') = I_1 + I_2 + 2\sqrt{I_1 I_2}\cos(\Delta\phi),
where
I_j(\mathbf{r}') = |A|^2\left|u_j(\mathbf{r}')\right|^2,
\Delta\phi(\mathbf{r}') = \Omega_1 - \Omega_2 + \kappa_1 - \kappa_2,
\kappa_j = \arctan\frac{\mathrm{Im}\left\{u_j(\mathbf{r}')\right\}}{\mathrm{Re}\left\{u_j(\mathbf{r}')\right\}}, \qquad j = 1, 2.
Equation (49) describes the 3D intensity distribution in a digital image of the two-point object and can be used for analysis of the lateral and longitudinal resolution of FINCH imaging. The lateral resolution is examined for the object points P1(x0 + Δx/2, 0, z0) and P2(x0 − Δx/2, 0, z0). As a criterion we use the quantity Vx defined as
V_x = \max\left\{\frac{I_x^{peak} - I_x^{centre}}{I_x^{peak} + I_x^{centre}},\ 0\right\},
where
I_x^{peak} = \frac{1}{2}\left\{I_{TP}\left[m(x_0 + \Delta x/2), 0, z'\right] + I_{TP}\left[m(x_0 - \Delta x/2), 0, z'\right]\right\},
I_x^{centre} = I_{TP}\left(m x_0, 0, z'\right).
As Eq. (53) is formally identical to the visibility of interference patterns, Vx is called the lateral visibility in this paper.

Two-point resolution in coherent light depends on the relative phase Δϕ associated with the observed sources. When Δϕ = π/2, the intensity of the image is identical to that resulting from incoherent sources. When the sources are in phase (Δϕ = 0), the resolution is worse in coherent light than in incoherent light. If the sources are in phase opposition (Δϕ = π), destructive interference occurs and the point sources are better resolved with coherent illumination than with incoherent light. A detailed analysis of the resolution in coherent light can be found in [13]. If the observed object points P1 and P2 are placed symmetrically with respect to the optical axis (x0 = 0), their diffraction images interfere constructively at the axial point P′(0, 0, z′) and Δϕ = 0 for an arbitrary separation distance Δx. For the two-point object laterally shifted to the position x0 ≠ 0, the phase difference Δϕ oscillates in the interval <0, 2π> if the distance of the points Δx is slightly changed. In the calculated visibility, fast oscillations then appear in the dependence of Vx on Δx. When the visibility is determined from experimental data, the oscillations do not occur. The reason is that the change in the relative phase causes slight changes of the interference pattern that are not resolved in the CCD detection.
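The influence of the relative phase on the two-point visibility of Eq. (53) can be illustrated with a 1D model in which each point image is a Gaussian amplitude spot; the in-phase, quadrature (equivalent to incoherent addition) and antiphase superpositions give clearly different visibilities. A sketch with an illustrative spot width and separation:

```python
import numpy as np

w = 1.0          # Gaussian spot radius (arbitrary units)
sep = 1.5        # separation of the two image points

x = np.array([sep/2, 0.0])    # evaluate at a peak and at the midpoint

def u(x, x0):
    """Gaussian amplitude spot centred at x0 (1D model of u_j)."""
    return np.exp(-(x - x0)**2/w**2)

def visibility(dphi):
    """Lateral visibility of Eq. (53) for the relative phase dphi."""
    I = np.abs(u(x, sep/2) + np.exp(1j*dphi)*u(x, -sep/2))**2
    I_peak, I_centre = I[0], I[1]
    return max((I_peak - I_centre)/(I_peak + I_centre), 0.0)

V_inphase = visibility(0.0)        # coherent, in phase
V_quad = visibility(np.pi/2)       # equivalent to incoherent imaging
V_antiphase = visibility(np.pi)    # coherent, phase opposition
```

At this separation the in-phase pair is unresolved (Vx = 0), while the quadrature pair still gives Vx ≈ 0.22 and the antiphase pair gives Vx = 1, reproducing the ordering discussed above.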

The longitudinal resolution is studied for two axial points specified by the coordinates P1(0, 0, z0 + Δz/2) and P2(0, 0, z0 − Δz/2). The longitudinal visibility Vz can then be defined as

V_z = \max\left\{\frac{I_z^{peak} - I_z^{centre}}{I_z^{peak} + I_z^{centre}},\ 0\right\},
where
I_z^{peak} = \frac{1}{2}\left\{I_{TP}\left[0, 0, \alpha(z_0 + \Delta z/2)\right] + I_{TP}\left[0, 0, \alpha(z_0 - \Delta z/2)\right]\right\},
I_z^{centre} = I_{TP}\left(0, 0, \alpha z_0\right).
Comparison of theoretical and measured visibilities is presented in the experimental section.

7. Experimental determination of resolving power of FINCH imaging

In the proposed experiment, a two-point incoherent source was recorded on the CCD camera and its digital reconstruction was performed. The FINCH imaging was analyzed for different mutual lateral and longitudinal positions of the point sources, and the consistency between the experimentally determined and calculated resolving power was examined.

7.1. Design and parameters of the experimental set-up

The set-up used for measurement of the FINCH resolution is shown in Fig. 5. Spatially incoherent light emitted by the collimated LED (Thorlabs M625L2-C1, 625 nm, 280 mW) is transmitted through a laser line filter (Thorlabs FL632.8-3 with FWHM 3 nm) and coupled into two single mode fibers (Thorlabs SM600, MFD 4.3 μm) with holders mounted on XYZ travel translation stages. The faces of the fibers are used as point sources, which can be precisely positioned in the object space near the focal plane of the collimating lens. The collimated beams pass through a polarizer, an iris diaphragm and a beam splitter, and fall on the reflective SLM (Hamamatsu X10468, 16 mm x 12 mm, 792 x 600 pixels). The polarizer is used to optimize the phase operation of the SLM. The SLM is addressed by computer generated holograms enabling splitting of the input beam into the signal and reference waves. The reference wave is transmitted with an unchanged wavefront shape. The signal wave is created by a quadratic phase modulation equivalent to the action of a lens with the focal length fd, and successively three different phase shifts 0, 2π/3 and 4π/3 are imposed on it. Light reflected by the SLM is diverted to the CCD camera (QImaging Retiga-4000R, 15 mm x 15 mm, 2048 x 2048 pixels) and three records of the interference patterns created by the signal and reference waves are subsequently acquired. The intensity records are processed in a PC and the digital image of the observed object is created. In this way, the 3D PSF was restored and the transverse and longitudinal resolution of the two-point object was examined for various parameters of the set-up.

 

Fig. 5 Experimental set-up for measurement of transversal and longitudinal resolution of FINCH imaging.


7.2. Experimental results

The preliminary measurement was focused on the PSF reconstruction. In this case, only one fiber was used for the hologram recording. By paraxial optics, the approximate location of the image was determined, and the accurate intensity distribution in the vicinity of the paraxial image point was created by the reconstruction algorithms. Using the Fresnel transform, transverse intensity profiles were evaluated in a sequence of planes near the paraxial image plane. These data were used for image reconstruction in the planes (x′, z′) and (y′, z′) of the image space, from which information about the transverse and longitudinal size of the image spot can be obtained. The PSF reconstructed from the experimental data is compared with the calculated PSF in Fig. 2 for the parameters f0 = 200 mm, fd = 750 mm, Δ1 = 250 mm, and Δ2 = 600 mm.

In examining the lateral resolution, the visibility was experimentally determined for different transverse separations Δx of the observed point sources. The measurements were carried out for different ratios Δ2/fd and the experimental results were compared with theoretical predictions. In the measurements, the fiber faces were placed in the focal plane of the collimating lens and the lateral distance of the fibers was changed successively using micrometer displacements. Holograms of the two-point object were recorded on the CCD camera and then used for digital image reconstruction. The planes of sharp imaging were determined by a numerical procedure implemented in the software for digital imaging. In separate transverse planes of the image space, the first- and second-order moments were calculated from the intensity profile and the standard deviations σx and σy evaluated. The longitudinal position where (σx2 + σy2)1/2 becomes minimal was identified as the plane of best focus. In this plane, the two-point image was evaluated and the value of the lateral visibility Vx defined by Eq. (53) was determined. The visibility evaluated for transverse separations of the point sources Δx varying from 0 to 0.14 mm is shown in Fig. 6(a). Experimental data were acquired in the system with parameters f0 = 200 mm, fd = 750 mm, Δ1 = 250 mm and with varying distances Δ2 = 0.25 fd, 0.5 fd, 0.95 fd, 1.0 fd and 1.5 fd. The experimental results confirm the theoretical prediction that the transverse resolution of FINCH imaging is limited by the resolution of the collimating lens only if Δ2 ≥ fd. Figure 6(a) shows that for such positions of the CCD, the experimental visibility is close to the theoretical curve corresponding to the diffraction-limited resolution of the collimating lens (dashed line in Fig. 6(a)). The visibility curves obtained for Δ2 = 0.25 fd and 0.5 fd show that the resolution of the collimating lens is substantially reduced when Δ2 < fd. This trend is also evident from Fig. 6(b), which shows the separation distance Δx corresponding to the visibility Vx = 0.8 in dependence on the ratio Δ2/fd. The slightly lower experimental resolution compared with the theoretical limit can be attributed to imperfections of the optical set-up, but the general trends are clearly reproduced. The improvement in lateral resolution achieved by changing the position of the CCD camera is illustrated in Fig. 7 for point-source separations Δx = 25 μm and 65 μm and camera positions Δ2 = 1.2 fd and 0.4 fd.
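The evaluation steps above (moments of the intensity profile, standard deviations, and the lateral visibility of Eq. (53)) can be sketched in a few lines. This is an illustrative reimplementation, not the authors' software; the Gaussian two-point profile is synthetic test data with hypothetical parameters.

```python
import math

# Illustrative sketch: centroid and standard deviation from the first- and
# second-order moments of a 1-D intensity profile, and the lateral visibility
# V_x = max{(I_peak - I_centre) / (I_peak + I_centre), 0} as in Eq. (53).
def moments(xs, intensity):
    total = sum(intensity)
    mean = sum(x * i for x, i in zip(xs, intensity)) / total
    var = sum((x - mean) ** 2 * i for x, i in zip(xs, intensity)) / total
    return mean, math.sqrt(var)

def lateral_visibility(i_peak1, i_peak2, i_centre):
    i_peak = 0.5 * (i_peak1 + i_peak2)   # mean intensity at the two image points
    return max((i_peak - i_centre) / (i_peak + i_centre), 0.0)

# Synthetic image profile of two points separated by dx_sep with spot size w
# (hypothetical values, in mm).
dx_sep, w = 0.05, 0.02
xs = [i * 0.001 - 0.1 for i in range(201)]
profile = [math.exp(-((x - dx_sep / 2) / w) ** 2)
           + math.exp(-((x + dx_sep / 2) / w) ** 2) for x in xs]

centre, sigma = moments(xs, profile)
i_at = lambda x0: profile[min(range(len(xs)), key=lambda i: abs(xs[i] - x0))]
Vx = lateral_visibility(i_at(dx_sep / 2), i_at(-dx_sep / 2), i_at(centre))
```

In the experiment the same moments are computed in 2-D for each reconstructed transverse plane, and the plane minimizing (σx2 + σy2)1/2 is taken as the plane of best focus.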

 

Fig. 6 Experimental determination of the lateral visibility: (a) Dependence of visibility Vx on lateral separation distance Δx of two-point object for different parameters Δ2/fd. (b) Separation distance Δx corresponding to the visibility Vx = 0.8 in dependence on the ratio Δ2/fd.


 

Fig. 7 Transverse image spot reconstructed from holograms recorded for different lateral separation distances of a two-point source, and different positions of the CCD.


Analysis of the longitudinal resolution was carried out for two points located on the optical axis of the collimating lens. The observed point sources were again realized by fibers and their longitudinal separation distance Δz was altered by precise mechanical displacements. Holograms were recorded on the CCD for each position of the two-point source and its images were created numerically using the Fresnel transform. Lateral intensity profiles were evaluated in a sequence of planes separated by a fine longitudinal step adapted to the depth of field of the system. In this way, the 3D intensity of the two-point source was reconstructed and used to determine the longitudinal visibility Vz defined by Eq. (56). The dependence of the visibility Vz on the longitudinal separation distance Δz of the point sources was found for different positions of the CCD given by Δ2 = fd ± 50 mm (Δ2/fd = 1.06 and 0.93), Δ2 = fd ± 375 mm (Δ2/fd = 1.5 and 0.5) and Δ2 = fd ± 550 mm (Δ2/fd = 1.73 and 0.26). The results are shown in Fig. 8(a). For each position of the CCD, the distance Δz at which the visibility drops to the value Vz = 0.8 was determined from the dependence of Vz on Δz. The separation distances Δz at which the visibility drops to this level differ for different positions of the CCD; their dependence on the ratio Δ2/fd is shown in Fig. 8(b). Figure 9 shows the intensity distribution obtained by digital reconstruction of holographic records of two point sources located on the axis of the collimating lens. The CCD camera was in the position Δ2 = 0.8 fd and the longitudinal separations of the observed point sources were Δz = 2.5 mm and 3.5 mm, respectively. The experimental results again confirm the prediction that the best longitudinal resolution is reached for Δ2 ≥ fd. In this case, the longitudinal resolution is limited by the longitudinal resolution of the collimating lens. The larger discrepancy in the longitudinal resolution can be explained by the fact that the longitudinal magnification is much more sensitive to parameter inaccuracies than the lateral magnification.
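For reference, the longitudinal visibility used in this measurement is defined (Eq. (56)) from the on-axis two-point intensity I_TP, evaluated at the image positions of the two points and at their midpoint, with α denoting the longitudinal magnification:

```latex
V_z = \max\left\{\frac{I_z^{\text{peak}} - I_z^{\text{centre}}}
                      {I_z^{\text{peak}} + I_z^{\text{centre}}},\,0\right\},
\qquad
I_z^{\text{peak}} = \tfrac{1}{2}\bigl\{I_{TP}\!\left[0,0,\alpha\!\left(z_0 + \Delta z/2\right)\right]
                                    + I_{TP}\!\left[0,0,\alpha\!\left(z_0 - \Delta z/2\right)\right]\bigr\},
\qquad
I_z^{\text{centre}} = I_{TP}\!\left(0,0,\alpha z_0\right).
```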

 

Fig. 8 Experimental determination of the longitudinal visibility: (a) Dependence of visibility Vz on longitudinal separation distance Δz of two-point object for different parameters Δ2/fd. (b) Separation distance Δz related to the longitudinal visibility Vz = 0.8 in dependence on the ratio Δ2/fd.


 

Fig. 9 Intensity distribution in the (x′, z′) plane reconstructed with different longitudinal separation distances Δz of a two-point source. Recording of holograms was carried out for the position of the CCD camera Δ2 = 0.8 fd.


At this point it is worth repeating that the optimal transverse resolution of FINCH imaging is obtained for Δ2 ≥ fd; hence the optimum for both the transverse and the longitudinal two-point resolution can be achieved for Δ2 equal to or slightly larger than fd. At this position of the detector, the wave generated by the modulator is in focus, so that the detected interference patterns have a small size. In the experiment, the two-point image was successfully reconstructed even from records taken directly at the distance Δ2 = 1.0 fd, where the transverse dimension of the interference pattern was close to the diffraction-limited Airy pattern. In this case, however, special care had to be taken to overcome a technical limitation caused by the large difference between the irradiances of the signal and reference waves: multiple CCD exposures had to be combined for each record in order to accumulate enough light from the reference wave while avoiding overexposure by the focused signal wave.
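The article states only that multiple CCD exposures were combined; one plausible scheme, shown here purely as an illustration with hypothetical sensor values, keeps for each pixel the longest exposure that did not saturate and normalizes it by the exposure time:

```python
# Illustrative sketch of combining multiple exposures so that a weak reference
# wave accumulates signal while a focused, bright signal wave does not
# overexpose. The per-pixel strategy and all numbers are hypothetical; the
# article does not specify the combination method.
SATURATION = 255  # full-well counts of the hypothetical 8-bit sensor

def combine_exposures(frames, times):
    """frames: list of images (flat lists of pixel counts), one per exposure
    time. For each pixel, use the longest unsaturated exposure and normalize
    by its time, giving an estimate in counts per unit time."""
    merged = []
    for p in range(len(frames[0])):
        value = frames[0][p] / times[0]          # fall back to shortest frame
        for frame, t in zip(frames, times):
            if frame[p] < SATURATION:
                value = frame[p] / t             # longer unsaturated exposure wins
        merged.append(value)
    return merged

# Simulate a bright focused pixel (80 counts/ms) and a dim reference-dominated
# pixel (0.9 counts/ms) recorded at 1, 4 and 16 ms.
rates = [80.0, 0.9]
times = [1.0, 4.0, 16.0]
frames = [[min(int(r * t), SATURATION) for r in rates] for t in times]
merged = combine_exposures(frames, times)
```

The bright pixel is recovered from the shortest exposure and the dim pixel from the longest, so both ends of the dynamic range survive in the merged record.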

8. Conclusions

The article presents a wave model of FINCH, which allows calculation of the PSF and subsequent analysis of the lateral and longitudinal resolution of a two-point object. For the first time, the diffraction limits of the lateral and longitudinal resolution achievable in FINCH imaging are calculated and examined experimentally. The main results can be summarized as follows:

  • In the ray approximation, the relationships between the geometrical parameters of the object and of its digitally created image were found and used to determine the lateral and longitudinal magnification of the image.
  • The three-dimensional diffraction-limited PSF was calculated and compared with the image spot reconstructed from experimental data acquired in the FINCH set-up for a point object.
  • The transverse and longitudinal resolution of FINCH imaging was examined theoretically and experimentally using the visibility function defined for a two-point source implemented by the LED coupled to single-mode fibers.
  • The transverse and longitudinal resolution of a two-point object was investigated experimentally for different parameters of the set-up, with very good agreement with theory.
  • It was verified that the transverse and longitudinal resolution of FINCH imaging is limited by the collimating optics, and that this limit can be reached only if the distance Δ2 between the CCD camera and the SLM is equal to or larger than the focal length fd of the SLM lens. We proved both theoretically and experimentally that when Δ2 is shorter than fd, the best two-point resolution is not reached and the transverse and longitudinal resolution provided by the collimating lens is significantly reduced.

The theoretical and experimental results presented in the article are applicable to estimating the image quality of FINCH imaging achieved in an experiment with given parameters. The complete mathematical treatment and the discussion of coherence properties may be useful for the study of transfer functions of FINCH imaging. The new findings on the longitudinal magnification and resolution can be valuable for the imaging analysis of 3D objects.

Acknowledgments

This work was supported by the Czech Ministry of Education, Projects No. MSM6198959213 and MSM0021630508, the Czech Ministry of Industry and Trade, project No. FR-TI1/364, IGA project Modern optics and applications PrF 2010 005, and the Grant Agency of the Czech Republic, project No. 202/08/0590.

References and links

1. D. J. Stephens and V. J. Allan, “Light microscopy techniques for live cell imaging,” Science 300(5616), 82–86 (2003). [CrossRef]   [PubMed]  

2. G. Indebetouw, A. El Maghnouji, and R. Foster, “Scanning holographic microscopy with transverse resolution exceeding the Rayleigh limit and extended depth of focus,” J. Opt. Soc. Am. A 22, 892–898 (2005). [CrossRef]  

3. Y. Li, D. Abookasis, and J. Rosen, “Computer-generated holograms of three-dimensional realistic objects recorded without wave interference,” Appl. Opt. 40, 2864–2870 (2001). [CrossRef]  

4. Y. Sando, M. Itoh, and T. Yatagai, “Holographic three-dimensional display synthesized from three-dimensional Fourier spectra of real existing objects,” Opt. Lett. 28, 2518–2520 (2003). [CrossRef]   [PubMed]  

5. J. Rosen and G. Brooker, “Digital spatially incoherent Fresnel holography,” Opt. Lett. 32, 912–914 (2007). [CrossRef]   [PubMed]  

6. J. Rosen and G. Brooker, “Fluorescence incoherent color holography,” Opt. Express 15, 2244–2250 (2007). [CrossRef]   [PubMed]  

7. J. Rosen and G. Brooker, “Non-scanning motionless fluorescence three-dimensional holographic microscopy,” Nat. Photonics 2, 190–195 (2008). [CrossRef]  

8. B. Katz and J. Rosen, “Super-resolution in incoherent optical imaging using synthetic aperture with Fresnel elements,” Opt. Express 18, 962–972 (2010). [CrossRef]   [PubMed]  

9. B. Katz and J. Rosen, “Could SAVE concept be applied for designating a new synthetic aperture telescope?,” Opt. Express 19, 4924–4936 (2011). [CrossRef]   [PubMed]  

10. B. Katz, D. Wulich, and J. Rosen, “Optimal noise suppression in Fresnel incoherent correlation holography (FINCH) configured for maximum imaging resolution,” Appl. Opt. 49, 5757–5763 (2010). [CrossRef]   [PubMed]  

11. G. Brooker, N. Siegel, V. Wang, and J. Rosen, “Optimal resolution in Fresnel incoherent correlation holographic fluorescence microscopy,” Opt. Express 19, 5047–5062 (2011). [CrossRef]   [PubMed]  

12. B. E. A. Saleh and M. C. Teich, Fundamentals of Photonics (J. Wiley, 1991). [CrossRef]  

13. S. Van Aert and D. Van Dyck, “Resolution of coherent and incoherent imaging systems reconsidered–classical criteria and a statistical alternative,” Opt. Express 14, 3830–3839 (2006). [CrossRef]   [PubMed]  



Figures

Fig. 1 Illustration of the basic principle of FINCH.

Fig. 2 Comparison of 3D Point Spread Function of FINCH imaging obtained by (a) experiment and (b) simulation model for parameters f0 = 200 mm, fd = 750 mm, Δ1 = 250 mm, and Δ2 = 600 mm.

Fig. 3 Dependence of transverse resolution Δr and longitudinal resolution Δz on the ratio Δ2/fd for the FINCH set-up using the collimating lens (fd = 750 mm, Δ1 = 250 mm; f0 = 100 mm, - - -; f0 = 150 mm, —; f0 = 200 mm, -.-.-).

Fig. 4 Dependence of the transverse resolution Δr and the longitudinal resolution Δz on the ratio Δ2/fd for lensless FINCH (fd = 750 mm; z0 = −50 mm, - - -; z0 = −75 mm, —; z0 = −100 mm, -.-.-).

Fig. 5 Experimental set-up for measurement of transversal and longitudinal resolution of FINCH imaging.
