
Interference probe ptychography for computational amplitude and phase microscopy

Open Access

Abstract

We have developed an approach to Fresnel domain ptychography in which the illumination consists of an interference pattern. This pattern is conveniently created by overlapping two coherent beams at an angle. Only the phase and orientation of the interferometric fringe pattern needs to be scanned to reconstruct a high-fidelity object image, which alleviates the requirements for accurate sample positioning and system stability. As such, the resulting imaging systems can be constructed in an extremely simple and robust way. Object images are reconstructed from recorded Fresnel diffraction data using a modified ptychographical iterative engine. We demonstrate the capabilities of this imaging system by recording images of various biological samples, demonstrating quantitative phase contrast as well as a spatial resolution better than 2.2 μm.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Lensless or computational imaging is an upcoming imaging method, in which numerical algorithms are used to replace (part of) the conventional optical components to achieve image formation in a microscope system. The central challenge in lensless imaging is the need to numerically retrieve a phase associated with a measured diffraction pattern, as the full electric field of a diffraction pattern needs to be known to numerically propagate the wave field back to the position where the object was located [1–4]. Various forms of lensless imaging have been introduced in recent years [5–10]. One of the most promising approaches is known as ptychography, which uses translational diversity to robustly retrieve the phase associated with measured diffraction patterns. The main implementation of ptychography is through spatial translation of the object through a spatially constrained beam profile [11]. This approach can typically retrieve both the fields associated with object and the illumination. While the initial version required high coherence and exact knowledge of the different illumination positions, more recent work has relaxed these requirements and enabled position correction [12, 13], as well as illumination with partially coherent beams and even speckle patterns [14–17]. However, such additions do come at the cost of increased computational complexity, or with the need for a higher degree of measurement diversity, thus increasing measurement time.

Recent work has shown that interference patterns are a viable tool for structured-illumination coherent diffractive imaging (CDI) [18–20]. Interference pattern illumination is an interesting approach to introducing measurement diversity in CDI, as it can be readily implemented, and accurately scanning an interference pattern over a large field of view is possible with relatively simple optical setups. In a practical realization, a sample is illuminated with two overlapping coherent beams at an angle, which generate an interference pattern on the sample. The resulting diffraction pattern is measured directly on a camera located close to the sample, employing no further imaging optics. To ensure sufficient measurement diversity, the propagation distance between camera and sample [18] or the orientation of the fringes [19, 20] can be scanned to collect a series of probe-response measurements. From this measurement set, an object is reconstructed using a phase retrieval algorithm.

In this work, we demonstrate a new approach to interference-based diffractive imaging, in which the illumination fringes are shifted over the sample and a small number of fringe orientations is used, while the propagation distance is kept constant. This setup minimizes the amount of mechanical movement needed for data acquisition, and does not require interferometric stability on timescales longer than a single camera frame acquisition. This enables us to create a very compact setup. We also show - for the first time to our knowledge - that a dataset consisting of only interference patterns and the resulting diffraction patterns can be used for a ptychographical reconstruction using the ptychographical iterative engine (PIE), and we show that some of the common extensions to regular ptychography, such as superresolution ptychography (SR-PIE) [21] and translation position correction [12], can be employed as well. Furthermore, we systematically investigate the amount of measurement diversity needed for accurate image reconstruction, and find that the number of measurements can be kept quite limited. We demonstrate this technique using a USAF target and two biological specimens, a mosquito wing containing many microscopic features and a thick sample of the nematode Caenorhabditis elegans, and obtain high-resolution amplitude and quantitative phase images for each of them.

2. Optical setup

While there are numerous ways of creating an interference pattern, we chose to employ a transmission-grating-based setup because of its simple and robust nature. An overview of the setup is shown in Fig. 1. A grating is imaged onto a sample whilst suppressing the zero-order transmission, as indicated. To enable changing the orientation of the fringes, the rotation of the grating can be controlled. It should be noted that neither the optical quality of the imaging lens nor the quality of the grating is particularly important for the imaging performance. The magnification of the lens-based system was chosen such that the maxima of the fringe pattern are separated by roughly four pixels on the camera. In this proof-of-concept experiment, adjusting the path length difference of the interferometer (which determines the position of the maxima of the fringes) was done by mechanically disturbing the setup during image acquisition. Even though this does not allow deterministic control of the path length differences, these parameters can be extracted from the measured diffraction patterns a posteriori.


Fig. 1 Optical setup for interference probe ptychography. A laser (450 nm, Thorlabs LP450-SF15 or 520 nm, Thorlabs LP520-SF15) illuminates a grating (120 lp/mm, Edmund Optics #66-342), which is imaged onto the sample using a 4f imaging system (f1=25 mm, f2=100 mm), while blocking the zero-order diffraction. The camera (IDS UI-5482LE-M) is positioned a short distance (approximately 1 mm) behind the sample. The illumination beams at the sample plane are called P(0) and P(1). The grating is mounted on a rotatable mount to allow easy changing of the orientation of the fringes. The sample is mounted on a translation stage so that it can be repeatedly removed and inserted for the reference measurement. In order to shift the illumination pattern over the sample, the setup is mechanically disturbed. The laser is triggered synchronously with the camera to minimize motion blurring.


A measurement consists of illuminating the object with fringe patterns under a limited number of orientations, while scanning the path length differences, and recording the resulting diffraction patterns. As our setup does not support deterministic control over the path length differences, we record 50 camera frames for every orientation, even though only about four are required for the reconstruction. From this series of measurements, the relevant frames are extracted in the processing routine based on minimal overlap, as described in the Appendix, Sec. A.1. Additionally, a reference measurement is taken for each grating orientation with the sample removed, consisting of the individual beams and a single interference pattern. This is done immediately after the sample measurement, but it could also be done as a separate reference measurement. To allow straightforward insertion and removal of the object, it was mounted on a translation stage. A typical diffraction image is shown in Fig. 3(a).

3. Numerical image reconstruction

3.1. Overview of the reconstruction algorithm

The reconstruction process relies mainly on the standard ptychographical iterative engine (PIE) algorithm [22], with some changes applied to speed up convergence, as described by Maiden et al. in [23]. As with any ptychographical experiment, accurate knowledge of the illumination function P is very important. As this is slightly more involved in our setup, it will be discussed extensively in the next section; for now we presume that the electric field illuminating the sample is known for all measurements. Throughout this paper, all variables in capitals represent 2D functions, where X represents a field at the object plane and X̃ (marked with a tilde) represents the corresponding field at the camera plane. Lowercase variables represent (complex) scalars. The only exception is the measured intensity, which is represented by I.

The measurement set that we acquired consists of a series of diffraction patterns Ii,j for a few different pathlength differences and a few different grating orientations, where index i indicates the grating orientation and index j indicates the measurement number. Furthermore, using a calibration measurement, we have access to the electric field of both illumination beams Pi(0), Pi(1) for every fringe orientation and their relative pathlength difference δ for every measurement i, j. The guess of the exit wave at the object location Ψg,n,i,j is computed for both beams separately using the current guess of the object Og,n.

$$\Psi_{g,n,i,j} = \Psi_{g,n,i,j}^{(0)} + \Psi_{g,n,i,j}^{(1)}$$
$$\Psi_{g,n,i,j}^{(0)} = P_i^{(0)}\, O_{g,n}\, \exp(+\iota\,\delta_{i,j}/2)$$
$$\Psi_{g,n,i,j}^{(1)} = P_i^{(1)}\, O_{g,n}\, \exp(-\iota\,\delta_{i,j}/2)$$
where the subscript g stands for guess, n is the iteration number, and i, j are the grating orientation and measurement number, respectively. P(0) is the electric field of the first beam, and P(1) is the electric field of the second beam at the object location. Og,n is the current object guess. δi,j is determined by the path length difference between the two arms of the interferometer. The two beams are propagated to the camera:
$$\tilde{\Psi}_{g,n,i,j} = \tilde{\Psi}_{g,n,i,j}^{(0)} + \tilde{\Psi}_{g,n,i,j}^{(1)}$$
$$\tilde{\Psi}_{g,n,i,j}^{(0)} = \mathcal{P}_{d}\big[\Psi_{g,n,i,j}^{(0)}\big]$$
$$\tilde{\Psi}_{g,n,i,j}^{(1)} = \mathcal{P}_{d}\big[\Psi_{g,n,i,j}^{(1)}\big]$$
Here we use operator notation, where 𝒫d[·] stands for Fresnel propagation from the object to the camera. In regular ptychography, the modulus constraint would now be applied, scaling the amplitude of every pixel in Ψ̃g,n,i,j to the measured amplitude √Ii,j, after which the resulting field is numerically propagated back to the object. However, this presumes a laser with perfect coherence. Our setup is sensitive to the path length difference between the two arms of the interferometer, and therefore it is important to take into account the limited mutual coherence of the two probe beams. Otherwise, the fringe contrast on the camera is overestimated, which leads to a large overestimation of the electric field in areas of the probe with low intensity and results in artefacts in the reconstruction. However, it is possible to extract the mutual intensity Ci from the same reference measurement that was used to characterize both of the probes. To take this effect into account, a forward model is employed to compute the expected intensity profile on the camera, and an update similar to [16, Eq. (5)] was implemented. In general, the modulus constraint for a function Ψ̃ can be written as:
$$\tilde{\Psi}' = \tilde{\Psi}\,\sqrt{\frac{I_{\mathrm{measured}}}{\tilde{I}_{\mathrm{expected}}}}$$
where Imeasured is the measured intensity on the camera and Ĩexpected is the expected camera intensity. To account for imperfect coherence, we compute Ĩexpected in the following way:
$$\tilde{J}_{g,n,i,j} = \big|\tilde{\Psi}_{g,n,i,j}^{(0)}\big|^2 + \big|\tilde{\Psi}_{g,n,i,j}^{(1)}\big|^2 + 2\gamma\, C_i\, \Re\!\left(\tilde{\Psi}_{g,n,i,j}^{(0)}\, \tilde{\Psi}_{g,n,i,j}^{(1)*}\right)$$
which explicitly writes out the expected intensity of two beams with imperfect mutual coherence. Here, J̃g,n,i,j is the expected intensity on the camera and Ci is the mutual intensity function as extracted from the reference measurement for fringe orientation i. In a perfectly coherent system, C should be unity, but in our measurements it is usually on the order of 0.85–0.9. γ is a constant that allows scaling of the mutual intensity function and is usually kept at one. Updating the object estimate then proceeds as follows:
$$\Psi_{c,n,i,j} = \mathcal{P}_{-d}\!\left[\tilde{\Psi}_{g,n,i,j}\,\sqrt{\frac{I_{i,j}}{\tilde{J}_{g,n,i,j}}}\right]$$
where 𝒫−d[·] stands for Fresnel propagation from the camera back to the object, and Ψc,n,i,j is the corrected exit wave. The object now needs to be updated with the additional information present in Ψc,n,i,j. However, Ψc,n,i,j represents an updated estimate of O × P. An intuitive approach to estimating the object would be to divide out the probe, but as the illumination pattern contains a large number of zeroes this is not a fruitful approach. There are numerous ways to deal with this, and we choose the rPIE update function [23]:
$$P_{i,j} = P_i^{(0)}\exp(+\iota\,\delta_{i,j}/2) + P_i^{(1)}\exp(-\iota\,\delta_{i,j}/2)$$
$$O_{c,n} = O_{g,n} + \frac{P_{i,j}^{*}}{(1-\alpha)\,|P_{i,j}|^2 + \alpha\,|P_{i,j}|^2_{\max}}\left(\Psi_{c,n,i,j} - \Psi_{g,n,i,j}\right).$$
where Oc,n is the corrected object estimate, P*i,j is the conjugated expected field of the combined probes, and α is a regularisation parameter. In the original rPIE implementation, α was set to 0.15. In our case, the entire area is very evenly illuminated, and as such the first term in the denominator, which amplifies the update in areas where the probe intensity is low, is not particularly valuable. Therefore we chose to set α slightly higher, at 0.5.
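For concreteness, the following is a minimal NumPy sketch of one inner update step for a single diffraction pattern, combining the partial-coherence forward model and the rPIE-style object update described above. The propagator implementation, function names and parameter defaults are ours and serve only as an illustration of the equations under the stated assumptions (square arrays, an angular-spectrum Fresnel propagator); they are not taken from the authors' code.

```python
import numpy as np

def fresnel_propagate(field, wavelength, pitch, distance):
    """Angular-spectrum Fresnel propagation of a sampled field over `distance` (metres).
    A negative distance back-propagates. Assumes a square array."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    H = np.exp(-1j * np.pi * wavelength * distance * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(field) * H)

def update_step(O, P0, P1, delta, I_meas, C, wavelength, pitch, d, alpha=0.5, gamma=1.0):
    """One exit-wave update for a single diffraction pattern (Sec. 3.1)."""
    # Exit waves for the two beams, each carrying half of the path-length phase.
    psi0 = P0 * O * np.exp(+1j * delta / 2)
    psi1 = P1 * O * np.exp(-1j * delta / 2)
    psi0_c = fresnel_propagate(psi0, wavelength, pitch, d)
    psi1_c = fresnel_propagate(psi1, wavelength, pitch, d)
    # Forward model with limited mutual coherence C (fringe contrast).
    J = np.abs(psi0_c)**2 + np.abs(psi1_c)**2 \
        + 2 * gamma * C * np.real(psi0_c * np.conj(psi1_c))
    # Modulus constraint: rescale towards the measurement, then back-propagate.
    scale = np.sqrt(I_meas / np.maximum(J, 1e-12))
    psi_corr = fresnel_propagate((psi0_c + psi1_c) * scale, wavelength, pitch, -d)
    # rPIE-style object update using the combined probe.
    P = P0 * np.exp(+1j * delta / 2) + P1 * np.exp(-1j * delta / 2)
    denom = (1 - alpha) * np.abs(P)**2 + alpha * np.max(np.abs(P)**2)
    return O + np.conj(P) / denom * (psi_corr - (psi0 + psi1))
```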

In classical PIE, the object is updated with this guess and the next diffraction pattern is taken. However, as there is a lot of spatial overlap in between our probes, it turned out that computing the update for all fringe positions of one orientation and averaging them leads to more robust results, especially when assessing a limited number of fringe orientations. We will call this averaging approach meanPIE, and it is similar to a parallel PIE [24] algorithm. Averaging all of the updates for all orientations resulted in a very stable image, but with artefacts for more complicated structures. An overview of results for different update methods is given in Appendix C. After iterating over all of the orientations, we arrive at a new guess of the object, Og,n+1.
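As an illustration of this averaging strategy, a meanPIE pass over one fringe orientation could be organised as in the sketch below, where single_update stands for a function such as the update_step sketch above with the probes held fixed (the names are ours, not the authors'):

```python
import numpy as np

def mean_pie_orientation(O, deltas, intensities, single_update):
    """meanPIE: compute the object update for every fringe phase of one
    orientation and average the results, instead of applying them sequentially.

    single_update(O, delta, I) is assumed to return the updated object for one
    phase step, e.g. the update_step sketch above with the probes held fixed."""
    updates = [single_update(O, delta, I) for delta, I in zip(deltas, intensities)]
    return np.mean(updates, axis=0)
```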

3.2. Determination of the reconstruction parameters

For a proper ptychographical reconstruction, multiple parameters need to be known. These are the back-propagation distance d, the electric fields of all the probes Pi(0,1), the relative path length differences δi,j, and the mutual coherence functions Ci.

As mentioned in the preceding section, an accurate estimation of the illuminating fields (Pi(0) and Pi(1)) at the object plane is required. In standard ptychography, the probe is moved with respect to the object and its contribution can be separated from that of the object, making it possible to reconstruct the probe along with the object [24,25]. As the probe does not move with respect to the object in our case, it is much harder to distinguish between feedback on the probe and feedback on the object. Fortunately, the process by which the probes are formed is well known and enables us to create a very accurate prediction of the illumination pattern. Furthermore, the path length difference between the probes has to be estimated, as it cannot be controlled during the measurement.

3.2.1. Reconstructing probe beams

From a reference measurement, consisting of the intensity of both individual beams and a measurement of the interference pattern between the two reference beams, the spatial phase profile corresponding to the difference in phase curvature Φi = Φi(x, y) between the two beams is reconstructed using off-axis holography [26]. Other methods, such as principal component analysis [27], could be employed as well. This phase difference contains a large linear phase slant corresponding to the angle between the two beams, and a nonlinear part corresponding to grating imperfections or imperfections in the imaging system. To make sure that the object is propagated back perpendicular to the camera and not in the direction of one of the two beams, the linear part of the phase slant is divided over the two beams. Additionally, the fringe contrast Ci is extracted. Afterwards, we will need an additional alignment procedure to align the probes of different orientations, which is why we call this intermediate probe estimate Q̃i instead of Pi. The expected field on the camera of the two probes for every orientation can be written as:

$$\tilde{Q}_i^{(0)} = \sqrt{I_i^{r(0)}}\,\exp\!\left(-\tfrac{\iota}{2}\,(k_x x + k_y y)\right)$$
$$\tilde{Q}_i^{(1)} = \sqrt{I_i^{r(1)}}\,\exp\!\left(-\tfrac{\iota}{2}\,(k_x x + k_y y) + \iota\,\Phi_i\right).$$
where Q̃ represents the electric field on the camera. kx and ky give the average direction of the phase map, found by locating the maximum of the FFT of exp(ιΦ). Iir(0,1) are the measured intensity profiles on the camera of reference beams 0 and 1, with the sample removed.
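A minimal sketch of how such a camera-plane probe estimate could be assembled from the reference data is given below; it assumes square arrays, that the carrier and Φi follow the sign conventions written above, and that the off-axis holographic retrieval of Φi has already been performed. All names are illustrative.

```python
import numpy as np

def probe_fields_on_camera(I_r0, I_r1, Phi, pitch):
    """Assemble the camera-plane probe estimates Q0, Q1 from a reference
    measurement: the two single-beam intensities (sample removed) and the
    phase difference Phi retrieved by off-axis holography. The linear part of
    Phi (the fringe carrier) is split evenly over the two beams so that the
    object is later back-propagated along the camera normal."""
    n = Phi.shape[0]                                  # assumes a square array
    # Average fringe direction: strongest peak of FFT{exp(i*Phi)}.
    spec = np.fft.fftshift(np.fft.fft2(np.exp(1j * Phi)))
    iy, ix = np.unravel_index(np.argmax(np.abs(spec)), spec.shape)
    kx = 2 * np.pi * (ix - n // 2) / (n * pitch)
    ky = 2 * np.pi * (iy - n // 2) / (n * pitch)
    y, x = np.meshgrid(np.arange(n) * pitch, np.arange(n) * pitch, indexing="ij")
    carrier = kx * x + ky * y
    Q0 = np.sqrt(I_r0) * np.exp(-0.5j * carrier)
    Q1 = np.sqrt(I_r1) * np.exp(-0.5j * carrier + 1j * Phi)
    return Q0, Q1
```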

3.2.2. Automated retrieval of the focus position

As an optional step in the measurement preparation procedure, our method lends itself very naturally to a form of "autofocusing", where the camera-sample distance is found automatically based on a parallax measurement. If, in addition to the interference patterns, the intensity images of the separate beams illuminating the sample have been recorded, the propagation distance can be estimated by propagating both beams back under their appropriate angles. The optimal back-propagation distance is found when the two images overlap. The refocused images are computed as:

$$D_i^{(0)}(z) = \left|\,\mathcal{P}_{-z}\!\left[I_i^{(0)} - I_i^{r(0)}\right]\right|^2$$
$$D_i^{(1)}(z) = \left|\,\mathcal{P}_{-z}\!\left[\big(I_i^{(1)} - I_i^{r(1)}\big)\,e^{+\iota\Phi_i}\right]\right|^2$$
Di(0) can be seen as the object estimate of the individual beams, suffering from a twin image. When the back-propagation distance is found, Di(0) and Di(1) overlap, and the l2-norm of the difference will be minimal:
$$d_i = \underset{z}{\operatorname{argmin}}\,\left\|D_i^{(0)}(z) - D_i^{(1)}(z)\right\|_2$$
where argminz[·] returns the z for which [·] is minimal. In principle, the back-propagation distance should be the same for all grating orientations. The optimal di is retrieved for all grating angles and the mean distance d is used. Typically, the relative error in the back-propagation distance is on the order of 4%. If the probe beams have a different intensity, the normalized crosscorrelation between Di(0)(z) and Di(1)(z) could be used as well. Even though this is a convenient technique, it requires an additional measuring step in which the diffraction patterns of the sample illuminated by both individual probes are measured.
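A brute-force version of this parallax autofocus could look as follows, assuming background-subtracted single-beam images and the same angular-spectrum propagator convention as in the earlier sketches; the scan range and names are illustrative.

```python
import numpy as np

def autofocus(I0, I0_ref, I1, I1_ref, Phi, wavelength, pitch, z_range):
    """Parallax autofocus: back-propagate the two background-subtracted
    single-beam images over a range of trial distances and return the distance
    at which they overlap best (minimal l2 norm of their difference)."""
    n = I0.shape[0]                                   # assumes square arrays
    fx = np.fft.fftfreq(n, d=pitch)
    FX, FY = np.meshgrid(fx, fx, indexing="ij")

    def back_prop(field, z):
        # Fresnel transfer function, sign chosen such that z > 0 back-propagates.
        H = np.exp(1j * np.pi * wavelength * z * (FX**2 + FY**2))
        return np.fft.ifft2(np.fft.fft2(field) * H)

    errors = []
    for z in z_range:
        D0 = np.abs(back_prop(I0 - I0_ref, z))**2
        D1 = np.abs(back_prop((I1 - I1_ref) * np.exp(1j * Phi), z))**2
        errors.append(np.linalg.norm(D0 - D1))
    return z_range[int(np.argmin(errors))]
```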

3.2.3. Aligning measurements sets of different fringe orientations

Taking a reference measurement requires insertion and removal of the sample. Due to instabilities in the setup, the object might not move back to exactly the same location. Additionally, the lenses f1 and f2 might suffer from aberrations, and rotating the grating might lead to a rotation of the average direction of the incident beams. Lastly, the average direction (kx, ky) is rounded to the nearest integer FFT bin, leading to slight round-off errors. To estimate this effect, we computed Di(d) = Di(0)(d) + Di(1)(d) as an estimate for the object after retrieval for all grating orientations i. We then compute the optimal translation of all measurements to make them overlap with the central fringe orientation, using FFT-based phase correlation on the gradient of the images with a DFT-based sub-pixel registration method [18,28]. Typically, the estimated object shift is on the order of a few pixels, as shown in Fig. 2(a).


Fig. 2 (a): estimated object shifts for probe orientations 1–8 of the USAF target. Most of the shifts are in the vertical direction, indicating that the object was slowly drifting down. (b): Geometrical indication of tilting a probe in order to shift the location of the reconstructed object without affecting the measured intensity patterns. If for a particular orientation i the initial back propagated image is shifted by Δx with respect to the required position, the overall angle of the two probes Pi(0), Pi(1) is tilted by angle −θ, such that after a new back propagation using Eq. (15), the object estimate is shifted to the desired position.


One way of aligning the different measurements with respect to each other would be to resample the measured diffraction patterns on the camera. However, as the camera records an interference pattern, it contains modulations at relatively high spatial frequencies, and this might lead to sampling artefacts if the required shift is not an integer number of pixels. Therefore we numerically tilt the probe in such a way that the expected intensity on the camera does not have to be altered. The procedure is indicated in Fig. 2(b).

Using a basic geometrical argument, we derive the additional angle that needs to be applied to Q̃i to retrieve the object at a shifted location Δxi in 1D, after which the same procedure can be applied for the 2D case. Both probe beams need to be tilted by an angle θ defined as

$$\tan\theta = \frac{\Delta x_i\, p}{d},$$
where Δxi is the required shift in pixels for the object retrieved at angle i, the pixel size is defined by p and the back-propagation distance is given by d. To tilt a wavefront by angle θ, the beam needs to be multiplied with a linear phase slant of steepness T(θ):
$$T(\theta) = \frac{2\pi}{\lambda}\,\tan\theta.$$
Inserting the required phase slant from Eq. (17) leads to
$$\Delta k_{\Delta x_i}(x) = \frac{2\pi}{\lambda}\,\frac{p^2\,\Delta x_i}{d}\,\frac{x}{2N}$$
where ΔkΔxi(x) is the phase retardation that needs to be applied at pixel number x, and N is the number of pixels. This slant is added to the probe estimate Q̃i at the location of the camera. The estimate of the probe at the object plane is now given by propagating the corrected probe estimates back to the object location.
$$P_i^{(0)} = \mathcal{P}_{-d}\!\left[\tilde{Q}_i^{(0)}\exp\!\Big(\iota\big(\Delta k_{\Delta y_i}(y) + \Delta k_{\Delta x_i}(x)\big)\Big)\right]$$
$$P_i^{(1)} = \mathcal{P}_{-d}\!\left[\tilde{Q}_i^{(1)}\exp\!\Big(\iota\big(\Delta k_{\Delta y_i}(y) + \Delta k_{\Delta x_i}(x)\big)\Big)\right]$$
where ΔkΔyi is the equivalent of Eq. (20) for the y-direction.
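Schematically, the tilt correction amounts to multiplying the camera-plane probe estimate by a linear phase slant before back-propagation, as in the sketch below. The sketch uses the plain geometric prefactor (phase = (2π/λ) tanθ × position); the exact centring and normalisation conventions of the equation above may differ, and all names are ours.

```python
import numpy as np

def tilt_probe_for_shift(Q, dx_pix, dy_pix, wavelength, pitch, d):
    """Add a linear phase slant to a camera-plane probe estimate Q so that the
    back-propagated object appears shifted by (dx_pix, dy_pix) pixels over the
    back-propagation distance d. Geometric sketch only."""
    n = Q.shape[0]                                 # assumes a square array
    x = (np.arange(n) - n / 2) * pitch             # physical coordinates, centred
    X, Y = np.meshgrid(x, x, indexing="xy")
    tan_tx = dx_pix * pitch / d                    # tilt angle for the x shift
    tan_ty = dy_pix * pitch / d
    slant = 2 * np.pi / wavelength * (tan_tx * X + tan_ty * Y)
    return Q * np.exp(-1j * slant)                 # sign convention is illustrative
```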

Even after this alignment, there might be a remaining misalignment at the sub-pixel level between probes of different orientations. Therefore, after computing the object update for all measurements j in a particular orientation i, an iterative position update procedure similar to [12] is used. After all measurements of a particular probe orientation have been processed, the new object estimate and the old object estimate are aligned using 2D cross-correlation in the Fourier domain. The average probe direction is then updated using Eqs. (21) and (22). Related techniques based on e.g. intensity gradients [13] can be used as well. Additional details on the alignment optimization procedure can be found in Appendix B.

3.2.4. Probe pattern phase determination

After a reasonable estimate of the individual probe beams has been obtained, the path length difference between the two arms, which determines the position of the maxima of the probe interference pattern, still has to be found for all images. As shown in Fig. 3(a), the individual fringes remain visible in the measured diffraction patterns. However, the position of the fringes will in general be shifted because of the interaction between the probe and the object. An accurate estimate of the path length difference therefore requires an accurate description of the object, and for this reason the path length difference estimates are refined after every ptychographical iteration.


Fig. 3 (a1–a3) Measured diffraction patterns for different path length differences (a1–a2) and different orientations (a3). (b) Back-focussed image of group 7.3–7.6, computed using a single measured diffraction pattern 𝒫d[I(0)], showing artefacts created by the twin image. (c) Reconstructed intensity without taking into account the forward model as defined in Eq. (8). The area shown in a1-3 is roughly indicated by the red dashed area. (d) Reconstructed intensity including the forward model to account for imperfect mutual coherence.


To estimate the path length difference between the two probes, we optimize the resemblance between the measured intensity on the camera and the expected intensity given by the current estimate of the object. As only the modulated part depends on the path length difference, the unmodulated background caused by the individual beams is subtracted from Ii,j using the current estimate of these background contributions.

$$I_{i,j}^{\mathrm{mod}} = I_{i,j} - \big|\tilde{\Psi}_i^{(0)}\big|^2 - \big|\tilde{\Psi}_i^{(1)}\big|^2$$
Alternatively, the individual measured beams (Ii(0), Ii(1)) can be used if they are available. For a particular measured diffraction pattern Ii,j and probes Pi(0) and Pi(1), the pathlength difference δi,j can be optimized as follows:
$$\delta_{i,j} = \underset{\delta}{\operatorname{argmax}}\, f_{i,j}(\delta)$$
$$f_{i,j}(\delta) = \left\langle K_i(\delta)\,\middle|\,I_{i,j}^{\mathrm{mod}}\right\rangle$$
$$K_i(\delta) = 2\,C_i\,\Re\!\left(\tilde{\Psi}_i^{(0)}\,\tilde{\Psi}_i^{(1)*}\,\exp(\iota\delta)\right)$$
where Ki(δ) is an estimate of the modulated part of the diffraction pattern, fi,j(δ) computes the overlap between the modeled intensity and the modulated part of the intensity using the dot product, and ℜ means taking the real part.

A further simplification is possible, as fi,j(δ) is a sinusoidal function. For every pixel in Ki(δ)(x, y), the expected intensity can be written as a sinusoidal function with an unknown phase δ(x, y), which is dependent on the phase difference between the two estimates of the camera field for that particular pixel and an overall path length difference. As fi,j(δ) is essentially a weighted average of all individual (x, y) positions, it will be sinusoidal as well for any reasonable estimate of the camera exit waves Ψ˜i(0) and Ψ˜i(1). Finding the optimal δ is then reduced to finding the maximum of a sinusoidal function with a known period. Using the definition of a discrete Fourier transform, the optimal path length difference can be computed using

$$\delta_{i,j} = \operatorname{Arg}\!\left(\frac{1}{N}\sum_{k=0}^{N-1} f_{i,j}(\delta_k)\,\exp(\iota\,\delta_k)\right), \qquad \delta_k = \frac{2\pi k}{N}.$$
where Arg[·] returns the argument of a complex phasor. Similar techniques are also used for the estimation of fields exiting complex media [29,30] and in digital holography [2], and the approach is similar to lock-in detection. For an accurate estimate, N needs to be at least four, but we chose N = 8 for a slightly more robust result. In practice, this gives an accurate estimate of the phase difference after the first few iterations of the ptychography algorithm. Rearranging Eq. (25), the estimated phase can be extracted for all frames of a particular fringe orientation in parallel. This is done after every full ptychographic iteration. For the first estimate, a random complex initial guess of the object is used.
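A compact implementation of this lock-in style estimate is sketched below, assuming a callable that evaluates the modelled modulated intensity Ki(δ) for a trial phase; the sampling of eight equidistant phases follows the description above, but the function and variable names are otherwise illustrative.

```python
import numpy as np

def estimate_delta(K_of_delta, I_mod, n_phases=8):
    """Lock-in estimate of the global path-length difference: sample the overlap
    f(delta) = <K(delta) | I_mod> at n_phases equidistant trial phases and take
    the argument of its first Fourier coefficient."""
    deltas = 2 * np.pi * np.arange(n_phases) / n_phases
    f = np.array([np.vdot(K_of_delta(d).ravel(), I_mod.ravel()).real for d in deltas])
    return np.angle(np.sum(f * np.exp(1j * deltas)) / n_phases)
```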

3.2.5. Fringe frequency adaptation

From the methods described in the previous subsections, in principle all the parameters required for reconstruction can be determined. One additional optimization step, which is not strictly necessary but was found to aid convergence, is the adaptation of the probe fringe frequency.

Inserting the sample alters the optical path length between the grating and the camera. While this is a rather small path length difference, it may alter the fringe frequency of the interferogram if the probe beams are not perfectly collimated. Furthermore, in between the acquisition of the reference measurements and the sample measurements for a particular orientation, the grating might rotate by a tiny amount. Even though both of these effects are subtle, the algorithm converges faster if their influence is characterised once before the retrieval routine is started. If these effects are not taken into account, a quadratic phase term may appear in the object, corresponding to the additional path length.

To properly deal with these effects before reconstruction, the measured probe patterns are fitted to Eq. (26), while allowing for an additional linear phase term applied to both probe beams with opposite signs, leading to a slight change of the fringe frequency.
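As an illustration, such a fringe-frequency correction could be estimated by a simple scan over a small extra carrier offset applied with opposite signs to the two probe estimates, as sketched below. The paper fits this term as part of the probe model; the brute-force scan, the restriction to a single fringe direction, and all names here are our simplifications.

```python
import numpy as np

def refine_fringe_frequency(Q0, Q1, I_meas, pitch, dk_range):
    """Scan a small extra linear phase (opposite signs on the two probe
    estimates) and keep the value whose modelled fringe pattern correlates
    best with the measured one. Illustrative sketch only."""
    n = Q0.shape[0]
    x = (np.arange(n) - n / 2) * pitch
    X, _ = np.meshgrid(x, x, indexing="xy")
    best_dk, best_score = 0.0, -np.inf
    for dk in dk_range:
        ramp = np.exp(0.5j * dk * X)
        I_model = np.abs(Q0 * ramp + Q1 / ramp)**2
        # Normalised overlap with the measurement (constant factors omitted).
        score = np.vdot(I_model.ravel(), I_meas.ravel()).real / np.linalg.norm(I_model)
        if score > best_score:
            best_dk, best_score = dk, score
    return best_dk
```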

4. Results

4.1. Performance assessment

To demonstrate the proposed method and characterize the attainable resolution, a USAF target has been illuminated with laser light at a wavelength of 450 nm, and the resulting diffraction patterns are recorded on a camera with a pixel pitch of 2.2 μm. Nine different fringe orientations are used, in steps of 10 degrees. Four different path length differences are selected for every fringe orientation. Typical measured diffraction patterns are shown in Fig. 3(a1)–3(a3), and an individual directly back-focussed image is shown in Fig. 3(b). The reconstructed intensity is shown in Fig. 3(d). The image is sharp and the ptychographic reconstruction manages to completely suppress the twin image visible in Fig. 3(c). Some variation in the brightness level of the object is caused by imperfect knowledge of the illumination function. In this case, special care is taken to optimize the modulation contrast of the probe beams; this modulation contrast Ci is about 91±3%. Although this might appear to be so close to unity that the forward model defined in Eq. (8) is not crucial, the reconstructed intensity is found to suffer from artefacts when the measured modulation contrast is not taken into account. This occurs specifically in sparse areas of the object, as shown in Fig. 3(c).

To assess the attainable resolution, the area indicated in red in Fig. 3(d) is magnified in Fig. 4(a). The smallest features that are resolvable in this image are roughly in group 7, element 4, with a width of about 2.7 μm. However, because the width of the bars is similar to the size of a single pixel on the camera (2.2 μm), the apparent resolution may be limited by the finite pixel size. To investigate the resolution in more detail, we employ Eqs. (21) and (22) to shift the location of the retrieved object by numerically tilting the incidence angle of all probes by the same amount. The tilt was chosen such that the object was shifted by only a fraction of a pixel. We observe a large effect on the apparent width of the smaller bars and on which features can be resolved (Fig. 4(b)–4(d)), which confirms that the image quality suffers from pixel-size effects. Therefore, the algorithm does not yet take full advantage of the information that is present in the measured diffraction patterns, as is to be expected given that the geometrical detection NA of up to 0.75 should give rise to a resolution well below the pixel pitch.


Fig. 4 (a–d) Intensity reconstruction of group 7 from a retrieved USAF sample, for a reconstruction with a numerical wavefront tilt corresponding to an object shift of 0, 0.25, 0.50, and 0.75 pixel. The smallest features that can be resolved are dependent on the numerical shift, even though the input data is identical. Reconstructing the data using MSR-PIE (e), reveals that the bars in group 7.6 with a width of 2.2 μm can now be separated.


4.2. Achieving sub-pixel resolution: the mean-SR-PIE algorithm

For collimated illumination, the ultimate resolution achievable in the Fresnel domain is determined by the pixel pitch of the camera. However, the type of illumination that we employed lends itself very well to subpixel localisation of the position of the fringes [31,32]. Also, the illumination pattern closely resembles the one used in white-light structured illumination microscopy, where a periodic illumination is employed to gain up to a factor of two in resolution based on the aliasing of higher-frequency features [33, 34]. The algorithm that we employ is adapted from the subpixel-PIE, or sPIE, algorithm, and is closely related to subpixel ptychography in the far field as described by Maiden et al. [21] and Batey [35, Chapter 4]. To overcome the limitations imposed by the finite pixel size of the camera, every camera pixel is divided into 2 × 2 sub-pixels. Instead of constraining the intensity of every individual sub-pixel using Eq. (8), another forward model is employed to compute the expected camera intensity by incoherently adding the intensities of all 2 × 2 sub-pixels. Afterwards, the intensities of the sub-pixels are scaled by the factor required to make the estimated camera intensity match the measured camera intensity, while the relative ratio of the intensities within the sub-pixels is left almost completely free. The expected camera intensity is computed as:

$$\tilde{J}'_{g,n,i,j}(x,y) = \tilde{J}_{g,n,i,j}(2x,\,2y) + \tilde{J}_{g,n,i,j}(2x,\,2y{+}1) + \tilde{J}_{g,n,i,j}(2x{+}1,\,2y) + \tilde{J}_{g,n,i,j}(2x{+}1,\,2y{+}1)$$
where J̃′g,n,i,j(x, y) is the expected intensity on the camera, consisting of the incoherent addition of the intensities of the sub-pixels. It has the same dimensions as the original measured diffraction pattern. Similar to previous implementations [21], the highest frequencies in the resulting image have been suppressed to limit noise, for which we use an eighth-order super-Gaussian filter with a width of 95% of the image. Additionally, the reconstruction may sometimes improve by choosing γ slightly lower than one. This feature is not used for Fig. 4, but has been taken into account in the data sets discussed in the following sections. When using the mean-SR-PIE algorithm on the same data, the smallest features in group 7 can be easily resolved, corresponding to a resolution of 2.19 μm, as can be seen in Fig. 4(e). Increasing the oversampling beyond a factor of two did not lead to a further increase in resolution.
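The binning-and-rescaling step at the heart of this sub-pixel forward model can be written compactly as in the sketch below, assuming the expected intensity has already been computed on a 2× oversampled grid; the helper name is ours.

```python
import numpy as np

def bin_and_rescale(J_sub, I_meas):
    """Sub-pixel forward model: the expected intensity of each camera pixel is
    the incoherent sum of its 2x2 sub-pixels; the sub-pixel intensities are
    then rescaled so every binned pixel matches the measurement, while the
    internal ratio of the four sub-pixels is left free."""
    n2 = J_sub.shape[0]                               # fine grid, 2N x 2N
    J_cam = J_sub.reshape(n2 // 2, 2, n2 // 2, 2).sum(axis=(1, 3))
    scale = I_meas / np.maximum(J_cam, 1e-12)         # per camera pixel
    return J_sub * np.kron(scale, np.ones((2, 2)))    # broadcast back to sub-pixels
```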

4.3. Imaging complex biological samples: mosquito wing

To demonstrate the feasibility of the method for larger and more complicated samples, we proceed with a measurement of an entire mosquito (Culicidae) wing on a microscope slide submerged in water. Nine different orientations were scanned with a spacing of 20°, and four path length differences were selected from every image. The back-propagation distance is automatically retrieved as 1.053±0.002 mm. In these measurements only a single lens was used, with a focal distance of 25 mm, and a grating with a pitch of 630 lp/mm, arranged to provide fringes with a period of roughly four pixels on the camera. The reference measurement employed the empty area next to the mosquito wing, providing a clean view of the probe interference pattern. In order to minimize multiple reflections from the cover glass, the laser power was tuned down to reduce the coherence length. As a result, Ci was typically as low as 62.5±3.6%. The mean-SR-PIE algorithm was used to reconstruct the object, using 50 ptychographic iterations. This approach results in a high-quality reconstruction of both object field and intensity, which are shown in Fig. 5. In the phase reconstruction, it is apparent that the wing has a varying optical thickness close to the veins. The algorithm successfully reconstructs tiny feathers on the edge of the mosquito wing, and even shows that there are tiny features on the wing structure, which are also visible under a classical intensity microscope. In the central part of the mosquito wing, a vein is visible that seems to gradually disappear. However, in the phase contrast image, it is quite apparent that it continues. This is confirmed by the microscope image, where it is hard to track the vein but it remains visible. The different colors of the wing in the microscope image are caused by thin-film interference. In empty areas of the cells, very small hairs are present, which are just about resolved in the reconstruction, as shown in the inset in Fig. 5(c1)–(c3). The reconstructed orientation of the hairs matches the orientation seen in a classical microscope.


Fig. 5 (a) Amplitude-phase reconstruction of a mosquito wing, color represents phase as indicated in the lower left corner. (b) Amplitude of reconstructed field. (c1) A detail of the edge of the wing shows that the optical thickness increases from the edge to the center of the cell. (c2) Amplitude contrast closely resembles the smallest features visible with a microscope at similar resolution (c3). (d1) A thin vein in the center (indicated by the white arrow) can easily be tracked with the amplitude-phase reconstruction but is hardly visible in the amplitude reconstruction (d2) or under the microscope (d3). The structure within each cell is caused by tiny hairs and is properly reconstructed as shown in the green inset.


4.4. Live animal imaging: C. elegans

The mosquito wing lends itself very well to ptychographical imaging because it is a very thin sample. To show that our system is capable of imaging thicker samples, we also image a biological specimen of living, anaesthetised C. elegans, with a typical diameter of 35 μm, suspended in a thick layer of agar. The resulting reconstructed object images are shown in Fig. 6. The internal structure of the nematode is too small to be imaged effectively with the standard PIE routine. Again, using mean-SR-PIE leads to the recovery of image features that are also visible in the microscope image. In particular, the edge of the worm can suddenly be resolved much more clearly, and details can be seen in the uterus that were not visible before and that seem to match a conventional bright-field microscope image (Fig. 6(c)). However, the microscope image is not fully identical to the reconstructed results. This is mainly because of the optical sectioning intrinsic to incoherent bright-field detection, which is more challenging in our setup based on coherent illumination. Instead, the reconstructed intensity resembles an overall attenuation throughout the entire worm.


Fig. 6 (a) Reconstructed amplitude image of an anaesthetised C. elegans worm, with three insets, showing the edge of the worm (I), eggs (II) and the uterus (III). While the uterus can be recognized, most of the other features are not resolved. (b) Same dataset reconstructed with the s-PIE algorithm. The increase in resolution makes it possible to see that the edge of the worm consists of two layers (I). Some egg-like features are visible (II), and more detail can be seen in the uterus. (c) Conventional bright-field microscope image of the same worm. In order to enhance the contrast in dark parts of the worm, the amplitude is shown instead of the intensity.


5. Discussion and conclusions

We have demonstrated a high-resolution computational imaging system based on ptychography with interference-pattern illumination. The optical hardware involved is straightforward to implement and requires no careful alignment, yet the microscope achieves quantitative phase contrast with high resolution and a fast convergence rate. Moreover, experimental variations such as sample drift can be compensated during the post-processing of the measurements.

It would clearly be advantageous to remove the need to measure the individual beams Ii(0) and Ii(1) separately, as it would allow more straightforward data acquisition and measurement automation. In the current preprocessing routine this is not possible. However, the ptychographical reconstruction procedure itself does not rely on these individual beam images, and it should be possible to obtain the calibration measurements without relying on the individual beams. Currently, the individual images are required for auto-focusing and obtaining the object shifts for different grating orientations.

The autofocusing method can be extended so that it does not require the individual diffraction images. An estimate of the incoherent background can be obtained from the largest singular vector of all measured diffraction patterns for a particular orientation i. This will closely resemble Ii(0)+Ii(1), leading to an image with two displaced objects, and from the internal symmetry in this image the back-propagation distance could be extracted as well. In order to determine the average object shifts between measurement sets at different orientations, the meanPIE algorithm can be run for only a few iterations using only the patterns corresponding to a single grating orientation, and the resulting object estimates can be used to determine the shifts. These enhancements will mainly benefit a setup in which the probe beams can be switched on and off repeatedly, for instance when a spatial light modulator is used to create the probe beams.

A. Appendix

A.1. Selecting phase-shifted measurements

The limited stability of the measurement system prevented accurate control over the phase of the interference pattern in these experiments. Therefore, the camera records 50 frames with an a priori unknown fringe phase (optical path length difference between the two beams) for every fringe orientation. Only two to four of these 50 images are required for the final analysis, provided that images at the correct phases are selected, such that each area of the sample is illuminated by at least one image. Also, if a new intensity image can be written as a linear combination of two measurements that are already included in the analysis, it will not provide useful additional information.

The fringes in the illumination beam are visible in the measured diffraction patterns. Therefore, one way of selecting the measurements would be to extract the overall path length difference δ for every individual measurement and select the measurements that are closest to an equidistant set ranging from 0 to 2π. However, this is computationally expensive, and due to vibrations the exact fringe spacing and orientation might vary slightly. Instead, we employ a method that aims to include new measurements based on minimal overlap with linear combinations of already included measurements. It is essentially a modified version of the Gram-Schmidt algorithm used to orthogonalize a matrix. A first image I0 is selected based on minimal cross-correlation with all of the other images. The other images are orthogonalized with respect to this measurement by subtracting I0 using the Gram-Schmidt procedure:

$$I'_n = I_n - \frac{\langle I_n \,|\, I_0\rangle}{\langle I_n \,|\, I_n\rangle}\, I_0$$
where In is the intensity of the n-th measurement, and ⟨In|I0⟩ is the dot product of images In and I0, both represented as one-dimensional vectors. After this operation, I′n contains all the information in In that cannot be expressed by I0. Therefore, the index n of the next measurement that gives the largest amount of additional information is the one with the highest l2 norm:
$$n_{\mathrm{next}} = \underset{n}{\operatorname{argmax}}\left(\big\|I'_n\big\|_2\right)$$
where nnext is the index of the next frame to be included in the analysis. The procedure is repeated to get the desired number of measurements, replacing In with I′n in Eq. (29) after every iteration.
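A simplified version of this selection procedure is sketched below. It differs from the text in two respects that should be kept in mind: the first frame is chosen here as the one with the largest mean-subtracted norm rather than by minimal cross-correlation, and a standard Gram-Schmidt projection is used for the orthogonalisation. All names are illustrative.

```python
import numpy as np

def select_frames(frames, n_select=4):
    """Greedy frame selection (Appendix A.1, simplified): repeatedly pick the
    frame whose residual -- after removing what the already selected frames can
    explain -- has the largest l2 norm."""
    F = frames.reshape(len(frames), -1).astype(float)
    residual = F - F.mean(axis=0)
    selected = []
    for _ in range(n_select):
        idx = int(np.argmax(np.linalg.norm(residual, axis=1)))
        selected.append(idx)
        v = residual[idx].copy()
        # Project the newly selected direction out of all remaining residuals.
        residual -= np.outer(residual @ v, v) / (v @ v + 1e-12)
    return selected
```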

In principle, a cosine-modulated interference pattern with an a priori unknown phase can be expressed using only two signals, a sine and a cosine [27], provided that the background is measured as well. Therefore the minimal number of patterns would be two if we can select two measurements that are exactly π/4 out of phase. However, for improved robustness we choose to include four patterns in the analysis. We do observe that typically the first two images that are selected are about π/4 out of phase.

One way of investigating the amount of additional information in the measurement data is to look at the norm of the remaining patterns I′n after including every new measurement, as shown in Fig. 7. As the background images were also measured, the intensities of beam 1 and beam 2 can be subtracted from I before the analysis. In both cases, however, it appears that after including three images most of the information is present, and adding more images does not significantly improve the image reconstruction.


Fig. 7 l2 norm of the remaining patterns in I′n after selecting npat measurements using the described procedure, for all orientations of the dataset used to reconstruct the USAF sample. For the orange dataset, the background patterns Iir(0,1) have been subtracted, indicating that typically only two patterns are required to extract most of the information. For the blue dataset, the raw measurements have been used without background subtraction, which seems to give slightly better results when more than three measurements have been included.


B. Iterative alignment optimization

Even after the initial alignment as described in the main text, the objects might be shifted with respect to each other by a fraction of a pixel size. We adopted a straightforward alignment procedure that is typically used in Fourier ptychography to correct for small shifts, but other methods could be used as well [12,13].

After processing all camera images of a particular fringe orientation, if the object was misplaced, the object estimate will have shifted slightly in the direction of the misplaced object. Our algorithm relies on detecting this shift and numerically tilting the probe in order to compensate for this difference. In similar methods, this step size is magnified, but in our case the feedback is strong enough that this did not help. The difference with the object before processing a particular fringe orientation is computed as Δ = Oc,n − Og,n. A new estimate of the object is computed by amplifying this difference:

O=Og,n+5Δ
This magnification is not strictly necessary, but it helps convergence. The optimal shift between Og,n and O′ is then found by aligning the objects using FFT-based cross-correlation with subpixel accuracy [28], as implemented in the scikit-image Python package. Using Eq. (20), all probes for this grating orientation are tilted in order to optimize the overlap. Even though this algorithm will not work for large differences, it does converge for small angles.
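Using the current scikit-image API, the drift detection step could be sketched as follows (phase_cross_correlation supersedes the older register_translation; the helper name and the use of the amplification factor of five follow the description above, but the code is illustrative only):

```python
import numpy as np
from skimage.registration import phase_cross_correlation

def detect_object_drift(O_before, O_after, amplify=5):
    """Detect the residual drift of the object estimate after processing one
    fringe orientation. The difference with the previous estimate is amplified
    (here by the factor 5 of O' = O + 5*Delta) before sub-pixel registration."""
    O_amplified = O_before + amplify * (O_after - O_before)
    # Register the amplitudes with 1/100-pixel precision; the resulting shift
    # is then turned into a probe tilt as described in Sec. 3.2.3.
    shift, _, _ = phase_cross_correlation(np.abs(O_before), np.abs(O_amplified),
                                          upsample_factor=100)
    return shift  # (row, column) shift in pixels
```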

C. Requirements on the number of diffraction patterns

In the original implementation of PIE, the object is updated as defined in Eq. (9) after processing each probe-response pair. In our algorithm, we average all updates after processing all fringe patterns of a particular fringe orientation. To investigate the difference, we examined how well the object is reconstructed when only a subset of the full measurement set is taken into account. The results are shown in Fig. 8.


Fig. 8 Reconstruction results for different numbers of input measurements. Three parameters have been varied: 1) the number of orientations nor, 2) the number of fringe patterns per orientation nφ, and 3) the number of orientations that are skipped when selecting a subset, nskip (i.e. nskip=2 means a 2× larger angle between consecutive orientations). The left column in every image is the retrieval result for the regular (S)PIE, the right column is retrieved using our mean-(SR-)PIE. The top row (Orig) is without sub-pixel resolution, the bottom row (Subpix.) is with 2×2 sub-pixel resolution. The number of iterations has been adjusted in such a way that the total number of processed feedback loops is constant: nits = 1836/(nor · nφ).


As visible in the top row, including eight or nine different orientations makes a minimal difference to the reconstructed result. However, when the number of images is reduced to six, the classical PIE algorithm with sub-pixel resolution starts to fail, even though mean-SR-PIE still provides acceptable image quality. The reconstruction artefacts resemble Fig. 3(c), where the laser coherence is overestimated. When the number of orientations is decreased to only three, but the number of phases per orientation is increased to eight, the object is still properly reconstructed. In this case, increasing the grating angle intervals leads to an improved reconstruction. For only two orientations, object reconstruction fails completely for the regular s-PIE, while mean-SR-PIE is still able to reconstruct some features.

For more complicated samples such as the mosquito wing, this analysis leads to different values for the minimum number of required measurements, and it is to be expected that the required number of measurements depends on the complexity of the sample. Therefore we cannot claim a true "minimal" number of measurements, although it is reasonable to conclude that more than one fringe orientation is required for normal image reconstruction (as also indicated by [20]), and more than two fringe orientations for sub-pixel resolution.

D. Table with retrieval properties


Table 1. Overview of settings used for retrieval of all the objects in the paper. d: Automatically retrieved back-propagation distance. nor: Number of fringe orientations measured. nϕ: Number of diffraction patterns selected per fringe orientation. nits: Number of ptychographical iterations used for reconstruction. λ: Illumination wavelength. C: Average coherence as extracted from the reference measurements. α: Angle between different grating orientations. CM: Coherence multiplier as defined in the main text.

Funding

Netherlands Organisation for Scientific Research (NWO) (13934); European Research Council (ERC) (637476).

Acknowledgments

We thank Joleen Traets and Jeroen van Zon for providing us with the C. elegans sample.

References

1. J. R. Fienup, "Phase retrieval algorithms: A personal tour [Invited]," Appl. Opt. 52, 45–56 (2013).

2. I. Yamaguchi and T. Zhang, "Phase-shifting digital holography," Opt. Lett. 22, 1268–1270 (1997).

3. J. R. Fienup, "Reconstruction of an object from the modulus of its Fourier transform," Opt. Lett. 3, 27–29 (1978).

4. S. Marchesini, H. He, H. N. Chapman, S. P. Hau-Riege, A. Noy, M. R. Howells, U. Weierstall, and J. C. H. Spence, "X-ray image reconstruction from a diffraction pattern alone," Phys. Rev. B 68, 140101 (2003).

5. D. W. E. Noom, K. S. E. Eikema, and S. Witte, "Lensless phase contrast microscopy based on multiwavelength Fresnel diffraction," Opt. Lett. 39, 193–196 (2014).

6. D. W. E. Noom, D. E. Boonzajer Flaes, E. Labordus, K. S. E. Eikema, and S. Witte, "High-speed multi-wavelength Fresnel diffraction imaging," Opt. Express 22, 30504–30511 (2014).

7. M. Sanz, J. A. Picazo-Bueno, J. García, and V. Micó, "Improved quantitative phase imaging in lensless microscopy by single-shot multi-wavelength illumination using a fast convergence algorithm," Opt. Express 23, 21352–21365 (2015).

8. W. Harm, C. Roider, A. Jesacher, S. Bernet, and M. Ritsch-Marte, "Lensless imaging through thin diffusive media," Opt. Express 22, 22146–22156 (2014).

9. R. M. Clare, M. Stockmar, M. Dierolf, I. Zanette, and F. Pfeiffer, "Characterization of near-field ptychography," Opt. Express 23, 19728 (2015).

10. M. Stockmar, P. Cloetens, I. Zanette, B. Enders, M. Dierolf, F. Pfeiffer, and P. Thibault, "Near-field ptychography: Phase retrieval for inline holography using a structured illumination," Sci. Rep. 3, 1927 (2013).

11. J. M. Rodenburg and H. M. Faulkner, "A phase retrieval algorithm for shifting illumination," Appl. Phys. Lett. 85, 4795–4797 (2004).

12. F. Zhang, I. Peterson, J. Vila-Comamala, A. Diaz, F. Berenguer, R. Bean, B. Chen, A. Menzel, I. K. Robinson, and J. M. Rodenburg, "Translation position determination in ptychographic coherent diffraction imaging," Opt. Express 21, 13592–13606 (2013).

13. P. Dwivedi, A. P. Konijnenberg, S. F. Pereira, and H. P. Urbach, "Lateral position correction in ptychography using the gradient of intensity patterns," Ultramicroscopy 192, 29–36 (2018).

14. P. Thibault and A. Menzel, "Reconstructing state mixtures from diffraction measurements," Nature 494, 68–71 (2013).

15. J. Zhong, L. Tian, P. Varma, and L. Waller, "Nonlinear Optimization Algorithm for Partially Coherent Phase Retrieval and Source Recovery," IEEE Trans. Comput. Imaging 2, 310–322 (2016).

16. N. Burdet, X. Shi, D. Parks, J. N. Clark, X. Huang, S. D. Kevan, and I. K. Robinson, "Evaluation of partial coherence correction in X-ray ptychography," Opt. Express 23, 5452 (2015).

17. P. Li, T. Edo, D. J. Batey, J. M. Rodenburg, and A. M. Maiden, "Breaking ambiguities in mixed state ptychography," Opt. Express 24, 9038–9052 (2016).

18. L. Loetgering, H. Froese, T. Wilhein, and M. Rose, "Phase retrieval via propagation-based interferometry," Phys. Rev. A 95, 033819 (2017).

19. M.-C. Zdora, P. Thibault, T. Zhou, F. J. Koch, J. Romell, S. Sala, A. Last, C. Rau, and I. Zanette, "X-ray Phase-Contrast Imaging and Metrology through Unified Modulated Pattern Analysis," Phys. Rev. Lett. 118, 203903 (2017).

20. C. Falldorf, C. von Kopylow, and R. B. Bergmann, "Wave field sensing by means of computational shear interferometry," J. Opt. Soc. Am. A 30, 1905 (2013).

21. A. M. Maiden, M. J. Humphry, F. Zhang, and J. M. Rodenburg, "Superresolution imaging via ptychography," J. Opt. Soc. Am. A 28, 604–612 (2011).

22. J. Rodenburg, A. Hurst, and A. Cullis, "Transmission microscopy without lenses for objects of unlimited size," Ultramicroscopy 107, 227–231 (2007).

23. A. Maiden, D. Johnson, and P. Li, "Further improvements to the ptychographical iterative engine," Optica 4, 736 (2017).

24. A. M. Maiden and J. M. Rodenburg, "An improved ptychographical phase retrieval algorithm for diffractive imaging," Ultramicroscopy 109, 1256–1262 (2009).

25. M. Odstrcil, P. Baksh, S. A. Boden, R. Card, J. E. Chad, J. G. Frey, and W. S. Brocklesby, "Ptychographic coherent diffractive imaging with orthogonal probe relaxation," Opt. Express 24, 8360 (2016).

26. E. Cuche, F. Bevilacqua, and C. Depeursinge, "Digital holography for quantitative phase-contrast imaging," Opt. Lett. 24, 291–293 (1999).

27. J. Vargas, J. A. Quiroga, and T. Belenguer, "Phase-shifting interferometry based on principal component analysis," Opt. Lett. 36, 1326 (2011).

28. M. Guizar-Sicairos, S. T. Thurman, and J. R. Fienup, "Efficient subpixel image registration algorithms," Opt. Lett. 33, 156 (2008).

29. S. M. Popoff, G. Lerosey, R. Carminati, M. Fink, A. C. Boccara, and S. Gigan, "Measuring the Transmission Matrix in Optics: An Approach to the Study and Control of Light Propagation in Disordered Media," Phys. Rev. Lett. 104, 100601 (2010).

30. T. Čižmár, M. Mazilu, and K. Dholakia, "In situ wavefront correction and its application to micromanipulation," Nat. Photonics 4, 388–394 (2010).

31. W. Bishara, T.-W. Su, A. F. Coskun, and A. Ozcan, "Lensfree on-chip microscopy over a wide field-of-view using pixel super-resolution," Opt. Express 18, 11181 (2010).

32. G. Zheng, S. A. Lee, Y. Antebi, M. B. Elowitz, and C. Yang, "The ePetri dish, an on-chip cell imaging platform based on subpixel perspective sweeping microscopy (SPSM)," PNAS 108, 16889–16894 (2011).

33. M. G. L. Gustafsson, "Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy," J. Microsc. 198, 82–87 (2000).

34. M. Lei, X. Zhou, D. Dan, J. Qian, and B. Yao, "Fast DMD based super-resolution structured illumination microscopy," in Frontiers in Optics 2016 (Optical Society of America, 2016), paper FF3A.5.

35. D. J. Batey, Ptychographic imaging of mixed states, Ph.D. thesis (University of Sheffield, 2014).
