## Abstract

Measuring transmission and optical thickness of an object with a single intensity recording is desired in many fields of imaging research. One possibility to achieve this is to employ phase retrieval algorithms. We propose a method to significantly improve the performance of such algorithms in optical imaging. The method relies on introducing a specially designed phase object into the specimen plane during the image recording, which serves as a constraint in the subsequent phase retrieval algorithm. This leads to faster algorithm convergence and improved final accuracy. Quantitative imaging can be performed by a single recording of the resulting diffraction pattern in the camera plane, without using lenses or other optical elements. The method allows effective suppression of the “twin-image”, an artefact that appears when holograms are read out. Results from numerical simulations and experiments confirm a high accuracy which can be comparable to that of phase-stepping interferometry.

© 2012 Optical Society of America

## 1. Introduction

Solutions to the problem of obtaining amplitude and phase of an electromagnetic field from a single intensity measurement are desirable in many areas of imaging research, predominantly in imaging modalities where the intrinsic radiation properties make it difficult to apply the established interferometry techniques of visible-light optics, such as X-ray, gamma or electron-beam imaging. Inline or Gabor holography [1] provides a solution to a certain extent. There, one illuminates the object with a sufficiently coherent beam and records the intensity of the interference pattern created by the scattered and unscattered wave parts. Due to its simplicity and the availability of high-resolution digital image sensors, it is nowadays frequently used in various fields of research [2–6]. However, the loss of phase information during the recording process gives rise to an inherent ambiguity: when the hologram is read out (either optically or numerically), a potentially disturbing “twin-image” appears in addition to the original object. This twin-image represents the conjugated diffraction order of the recorded intensity hologram and thus differs from the original object in the sense that at the position of the hologram it is the conjugated version of the original object field. Hence, the twin-image resembles a mirrored version of the original and appears in a different axial plane if the object was placed at a finite distance from the image sensor during the recording process. Since the invention of holography, researchers have sought to suppress the twin-image, first by optical means [7–9], later also numerically, for instance by linear filtering [10, 11] or by iterative methods [12–17] that often employ variants of the Gerchberg-Saxton algorithm [18].

The basic principle of such algorithms is to numerically propagate the light field back and forth between the recording and the object plane, applying certain constraints in each plane. Ideally, the algorithm ends up with a totally suppressed twin-image, leaving only the real object. In the recording plane, an obvious constraint is the recorded intensity, which is however a valid boundary condition for both the real image and its undesired twin. Therefore, in the object plane one must apply a constraint which breaks the ambiguity between real and twin-image to make the algorithm converge towards the true solution. Depending on the application, a variety of different constraints can be introduced [19], although it is in general not possible to find a method that guarantees convergence towards the true solution [19]. It often happens that the algorithm gets “trapped” in local error minima and ends up with an imprecise approximation of the original object. The final performance, i.e. the convergence speed and the accuracy of the result, depends crucially on the strength of the imposed boundary conditions.

One possibility of establishing a boundary condition is to place another, known object alongside the unknown sample in the object plane during the recording process. In the iterative reconstruction routine one can then partly replace the field in the object plane by this known object. Such known “reference” objects are often apertures of a certain size and shape [20]; however, one could also think of other objects that have a more complex structure, e.g. transmissive objects with a spatially varying optical thickness. It is also possible to utilise information about the illumination wave outside the object’s boundaries instead of using real objects [15, 17]. The required localisation of the sample’s boundaries can be implemented in the search algorithm [17]. The use of patterned illumination has also been shown to be beneficial [21]. All these boundary conditions are variants of support constraints [20]. Originally, in the context of using apertures as constraints, the object support was defined as the area where the object function is non-zero. More generally, it can be defined as the area where the field is *unknown*. Within the scope of this manuscript, we will refer to the remaining area, i.e. the area where the field is known, as the “periphery”.

Considering the approach of introducing a known object into the sample plane to serve as a constraint, one might ask how it should ideally be structured to impose strong boundary conditions. It is well known that objects of asymmetric shape lead to better results than objects of symmetric shape [20], because phase retrieval is more “unique” in such cases. Furthermore, it appears reasonable that one has to maximise the “crosstalk” between the known periphery and the unknown object in order to enhance the “flow of information” from periphery to object that takes place at each iterative step. Since there is naturally no such crosstalk in the object plane, because object and periphery are by definition spatially separated there, one can only maximise it in the recording plane, where both fields mix due to diffraction. To this end we propose to use a designed diffractive periphery that maximises the observable crosstalk, i.e. that alters the object’s diffraction pattern in the recording plane as much as possible by its presence. A value to quantify this crosstalk can be defined as follows:

$$M=\frac{\left\langle \left|{E}_{\mathit{tot}}\right|\left|{E}_{\mathit{ref}}\right|\right\rangle }{\sqrt{\left\langle {\left|{E}_{\mathit{tot}}\right|}^{2}\right\rangle \left\langle {\left|{E}_{\mathit{ref}}\right|}^{2}\right\rangle }}$$

*E _{ref}* is the reference field, i.e. the light field in the recording plane when no object is inserted, and *E _{tot}* the entire field created by periphery *and* object. The function 〈...〉 denotes spatial averaging over the entire recording plane. The expression above is basically the correlation of *E _{tot}* with *E _{ref}*. In case the object has only little influence on the periphery diffraction pattern, we have *E _{tot}* ≈ *E _{ref}* and the expression takes a value close to one. We expect an unsatisfactory algorithm performance for this case. On the other hand, if the fields scattered by periphery and object interfere strongly at the recording plane, *E _{tot}* and *E _{ref}* will show significant differences. Then the correlation term 〈|*E _{tot}*| |*E _{ref}*|〉 in Eq. 1 will be smaller than one and we expect an improved phase retrieval performance. The theoretical optimal value of *M* is zero.
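As a minimal numerical sketch, such a crosstalk value can be computed from two sampled fields as follows; the normalisation by the RMS moduli is our reading of Eq. 1, and all function and variable names are illustrative:

```python
import numpy as np

def crosstalk_M(E_tot: np.ndarray, E_ref: np.ndarray) -> float:
    """Normalised correlation of the field moduli in the recording plane.

    Values close to 1 indicate weak mixing between object and periphery
    fields; smaller values indicate stronger crosstalk (0 is optimal).
    """
    num = np.mean(np.abs(E_tot) * np.abs(E_ref))
    den = np.sqrt(np.mean(np.abs(E_tot) ** 2) * np.mean(np.abs(E_ref) ** 2))
    return num / den
```

By the Cauchy-Schwarz inequality this value never exceeds one, and it equals one exactly when the two modulus distributions are proportional.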

The principle is outlined in Fig. 1. The drawing on the left illustrates the case of plane wave illumination. The wave parts passing beside and going through the object start to interfere as they propagate towards the recording plane. For lensless imaging applications aiming at high resolution, the recording plane is located relatively close, hence the size of the region where both fields overlap is quite small. This represents an explanation for the general fact that twin-image suppression performs worse for large Fresnel numbers – a behaviour that was also pointed out by other authors [22].

The sketch on the right depicts our proposal, where the periphery is designed to shape the illumination beam for maximal overlap with the object’s diffraction pattern in the recording plane. In order to achieve a large overlap, the peripheral structure could be similar to that of a diffractive lens or axicon. Another possibility is to design the periphery using iterative hologram computation algorithms. In this case, the periphery phase will adopt a kind of scrambled appearance, which is however not random but optimised to condense the light in the desired way. It should be noted that even a simple diffuser disc leads to an enhanced field overlap in the recording plane and can thus be expected to yield improved phase retrieval performance. Because of the simplicity of this approach, we performed experiments with such discs and investigated the obtainable quality.

## 2. Experiment with a diffuser disc

Employing a polymer diffuser disc as peripheral object, we took single intensity recordings of the light field scattered by a transparent insect wing and applied a phase retrieval algorithm (see section 3 for details about the algorithm implementation) to retrieve its amplitude and phase functions. We tested holographic diffusers from *Edmund Optics* (NT65-883, NT65-554 and NT47-996) with specified cut-off diffraction angles of 0.5°, 1° and 5°, respectively, finding the 1° scatterer to perform best. The value *M* for this specific combination of specimen and diffuser was measured to be 0.988, compared to 0.993 when the diffuser disc was removed and the object illuminated by a plane wave. Although the change in the parameter *M* appears marginal, it has nevertheless a great influence on the achievable quality.

A sketch of the set-up is shown in Fig. 2(a). A hole of 3 mm diameter was drilled into the centre of the diffuser, which was then placed at a distance of about 15 cm from a CMOS sensor (Canon EOS 1000D) with a size of 22.2×14.8 mm^{2} and a pixel side length of 5.7 *μ*m. These dimensions define the achievable resolution to be on the order of 10 *μ*m. The diffuser was illuminated with a helium-neon laser beam (633 nm wavelength) of Gaussian profile and 9 mm diameter (1/*e*^{2}), such that a part of the beam passed undiffracted through the hole in the diffuser. This part illuminated the object in the specimen plane, which was located a couple of millimetres behind the diffuser disc. At the recording plane, the field diffracted by the diffuser overlapped well with the remainder of the light. A typical recorded image is shown in Fig. 2(b). From this single image, the amplitude and phase of the insect wing could be reconstructed with high accuracy by the phase retrieval algorithm (see Fig. 2(c)).

To successfully apply the phase retrieval algorithm, it is necessary to have accurate knowledge of the light field scattered by the diffuser. This field corresponds to the value *E _{ref}* of Eq. 1. On this account a beam splitter cube was placed between object and recording planes (not indicated in the figure), which made it possible to introduce a reference wave to interfere with the scattered field. Amplitude and phase of the scattered field were measured by stepping the phase of the reference beam (six equidistant phase steps in the interval [0, 2*π*]) and taking images of the corresponding interferograms. It should be noted that this interferometric measurement has to be conducted only a single time. Once the diffuser disc has been characterised, amplitude and phase information of arbitrary samples can be measured from single intensity recordings of their diffraction patterns, without using any imaging optics.
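The reconstruction from such phase-stepped interferograms follows the standard N-step formula: with reference phases φ_n = 2πn/N, the weighted sum (1/N) Σ_n I_n e^{iφ_n} cancels the constant intensity terms and isolates the complex interference term. A minimal sketch (names are illustrative, not from the original work):

```python
import numpy as np

def complex_field_from_steps(frames):
    """Recover conj(E_reference) * E_scattered from N equidistant phase steps.

    frames[n] is the intensity recorded with the reference phase advanced
    by 2*pi*n/N; the constant terms |E_reference|^2 + |E_scattered|^2
    cancel in the phase-weighted sum (valid for N >= 3).
    """
    frames = np.asarray(frames, dtype=float)
    n = frames.shape[0]
    weights = np.exp(1j * 2.0 * np.pi * np.arange(n) / n)
    return np.tensordot(weights, frames, axes=1) / n
```

With a plane reference wave of known amplitude, dividing the result by that amplitude yields the scattered field itself.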

It is worth pointing out the possibility of performing numerical refocussing once the complex field has been obtained. Figure 3 contains images which demonstrate this ability. The image on the left shows the amplitude of the insect wing, which is also shown in Fig. 2(c). The right image shows the amplitude of the same field, but defocussed by seven millimetres, such that the holey scattering disc comes into focus. The development of the field amplitude in axial steps of 1 mm in a volume around the specimen is provided by means of a movie (Media 1).

## 3. Phase retrieval algorithm

Our algorithm is based on the error-reduction algorithm [23], which is a variant of the Gerchberg-Saxton algorithm. Its inputs are the interferometrically measured complex field that is scattered by the periphery (referred to as “reference field” *E _{ref}*) and a single intensity recording of the field scattered by object plus periphery (referred to as “total field” *E _{tot}*), similar to that shown in Fig. 2(b). Both fields are measured in the recording plane, therefore this is also the plane where the algorithm has its starting point. In an initial step, both fields are combined to a new field *E*, defined in the recording plane:

$$E(\mathbf{r},{z}_{\mathit{rec}})=\left|{E}_{\mathit{tot}}(\mathbf{r},{z}_{\mathit{rec}})\right|\frac{{E}_{\mathit{ref}}(\mathbf{r},{z}_{\mathit{rec}})}{\left|{E}_{\mathit{ref}}(\mathbf{r},{z}_{\mathit{rec}})\right|}$$

**r** represents the transverse coordinate vector and *z _{rec}* the axial coordinate of the recording plane. In the following, the phase retrieval algorithm is executed, which iteratively performs the following four steps:

- (i) The field *E* of Eq. 2 is numerically propagated into the object plane using an appropriate propagation operator *F*: $$E(\mathbf{r},{z}_{\mathit{obj}})=F\left[E(\mathbf{r},{z}_{\mathit{rec}})\right]$$
- (ii) The support constraint is applied by replacing *E*(**r**, *z _{obj}*) by the reference field *E _{ref}*(**r**, *z _{obj}*) in the entire object plane except at the area which is occupied by the object. This area defines the object support, and the remaining area the periphery. Since the object’s boundaries cannot be clearly identified during the first couple of iterations, we initially choose the object support large enough to contain any possible sample. This initial support is referred to as *S _{ini}*, which for the experiment described in section 2 corresponds to the area defined by the hole in the diffuser. The resulting new field *E*′ can be expressed as: $${E}^{\prime}(\mathbf{r},{z}_{\mathit{obj}})=\{\begin{array}{ll}E(\mathbf{r},{z}_{\mathit{obj}})\hfill & \text{if}\hspace{0.17em}\mathbf{r}\in {S}_{\mathit{ini}}\hfill \\ {E}_{\mathit{ref}}(\mathbf{r},{z}_{\mathit{obj}})\hfill & \text{if}\hspace{0.17em}\mathbf{r}\notin {S}_{\mathit{ini}}.\hfill \end{array}$$
- (iii) The modified field *E*′ is numerically propagated back into the recording plane: $$E(\mathbf{r},{z}_{\mathit{rec}})={F}^{-1}\left[{E}^{\prime}(\mathbf{r},{z}_{\mathit{obj}})\right]$$
- (iv) Finally, we apply our second constraint, which is the recorded modulus of the total field *E _{tot}*: $$E(\mathbf{r},{z}_{\mathit{rec}})\to \left|{E}_{\mathit{tot}}(\mathbf{r},{z}_{\mathit{rec}})\right|\frac{E(\mathbf{r},{z}_{\mathit{rec}})}{\left|E(\mathbf{r},{z}_{\mathit{rec}})\right|}$$

The above steps (i)–(iv) define one iteration of the phase-retrieval algorithm. After a couple of iterations, the object boundaries are typically very well defined. This allows the initial object support *S _{ini}* to be replaced by a refined one (*S _{final}*) that closely matches the sample shape (see Fig. 4). This refinement is done by automated object segmentation and does not require the user to take any action. It should be noted that redefining the object support also implies a redefinition of the periphery area: parts of the area that formerly belonged to the object support become parts of the periphery. The object-support refinement is similar to that described in Ref. [17] and typically leads to significantly improved results.
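To make the four steps concrete, here is a minimal sketch of one iteration in Python, assuming the propagation operator *F* is implemented with the scalar spectrum of plane waves method (as in the simulations of section 5); all function and variable names are illustrative, not from the original work:

```python
import numpy as np

def angular_spectrum(field, dist, wavelength, dx):
    """Scalar 'spectrum of plane waves' propagation over distance dist (m)."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)  # spatial frequencies, cycles per metre
    fy = np.fft.fftfreq(ny, d=dx)
    f2 = fx[None, :] ** 2 + fy[:, None] ** 2
    kz = 2 * np.pi * np.sqrt(np.maximum(0.0, wavelength ** -2 - f2))
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * dist))

def er_iteration(E_rec, E_ref_obj, I_tot, support, dist, wavelength, dx):
    """One error-reduction iteration, steps (i)-(iv) of the text.

    E_rec     -- current field estimate in the recording plane
    E_ref_obj -- known reference (periphery) field in the object plane
    I_tot     -- recorded intensity of the total field
    support   -- boolean mask, True inside the object support
    dist      -- object-to-sensor distance
    """
    # (i) propagate from the recording plane back into the object plane
    E_obj = angular_spectrum(E_rec, -dist, wavelength, dx)
    # (ii) support constraint: outside the support, impose the known field
    E_obj = np.where(support, E_obj, E_ref_obj)
    # (iii) propagate forward into the recording plane
    E_rec = angular_spectrum(E_obj, dist, wavelength, dx)
    # (iv) modulus constraint: impose the recorded amplitude, keep the phase
    return np.sqrt(I_tot) * np.exp(1j * np.angle(E_rec))
```

In a full reconstruction this function would be called repeatedly, with the support mask replaced by the refined segmentation after the first iterations.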

Optionally, one may also include an additional constraint in step (ii): at positions *within* the object support where |*E*(**r**, *z _{obj}*)| > |*E _{ref}*(**r**, *z _{obj}*)|, i.e. where the object seems to amplify the illumination intensity, one can replace *E*(**r**, *z _{obj}*) by |*E _{ref}*(**r**, *z _{obj}*)| *E*(**r**, *z _{obj}*)/|*E*(**r**, *z _{obj}*)| [12]. This, however, restricts the method to objects that do not amplify light. It should be noted that this additional constraint was found to have only a weak effect on the performance of the algorithm for the objects investigated. Therefore it was not applied in the numerical simulations presented in this manuscript.
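A sketch of this optional clipping step (names are illustrative; `support` marks the object support):

```python
import numpy as np

def cap_amplitude(E_obj, E_ref_obj, support):
    """Clip the estimate where it exceeds the illumination amplitude.

    Inside the support, positions with |E| > |E_ref| keep their phase but
    are limited to the reference amplitude (non-amplifying objects only).
    """
    too_bright = support & (np.abs(E_obj) > np.abs(E_ref_obj))
    capped = np.abs(E_ref_obj) * np.exp(1j * np.angle(E_obj))
    return np.where(too_bright, capped, E_obj)
```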

## 4. Experimental comparison with alternative methods

An important question is how well the presented method compares to alternative ones, for example inline holography or related phase retrieval techniques, in terms of quantitative accuracy. We addressed this question by applying four different techniques to measure amplitude and phase of a sample: (i) our phase retrieval method with scattering periphery; (ii) phase retrieval *without* scattering periphery (plane wave illumination); (iii) inline holography; (iv) phase-stepping interferometry (serving as a quality standard). All four methods could be straightforwardly tested with the set-up shown in Fig. 2. Method (ii) required removing the diffuser disc but otherwise followed the same procedure as method (i). Inline holography was performed by numerically propagating the modulus of the sample’s diffraction pattern from the recording plane into the object plane; here, too, the scattering disc was removed during the recording process. Phase-stepping interferometry was performed with the method described in section 2.

All experimental results are summarised in Fig. 5. Every method represents a different column in the image matrix. The first two image rows contain the reconstructed amplitude and phase of the wing. The third row contains the modulus of the difference between the reconstructed and interferometrically measured fields. The phase-retrieval results were obtained by 110 iterations of the error-reduction algorithm. After the first 10 iterations, the preliminary result allowed for object segmentation and thus for an accurate localisation of the object’s boundaries. Taking the interferometry results (first column) as reference, one can detect a clear quality difference between the results of the other three methods. Clearly, the random phase approach performs best, delivering a complex field that appears almost indistinguishable from the interferometric measurement. The approach using the plane wave periphery delivers results that seem quite accurate at first glance but reveal significant reconstruction errors in both amplitude and phase on closer inspection. Finally, the inline hologram reconstruction – as expected – returns a superposition of real and twin-image, and therefore the worst results in view of quantitative accuracy. Considering the simplicity and robustness of this approach, however, it represents a valuable tool in many applications. Furthermore, it should be emphasised that the twin-image does not necessarily represent a problem for inline holography. Depending on the set-up parameters, its influence can vary from heavily disturbing to not recognisable [24].

The two graphs below the image matrix compare the reconstructed phase along different sections through the insect wing. The locations of the sections are indicated in the interferometry phase image. The standard deviation of each method’s phase reconstruction from the interferometric measurement is stated next to the graphs. Here, too, the random periphery approach delivers the best results, with root mean square values around 0.1 rad, which corresponds to about *λ*/60.

## 5. Numerical simulations

So far we have shown that using a *randomly* scattering periphery can lead to improved results. Let us now investigate *optimised* peripheries. We have conducted a series of numerical simulations in order to provide further support for the role of *M* as an indicator of phase retrieval performance. We have investigated the use of different peripheries with varying values of *M*, amongst them a random-phase periphery similar to that used in our experiments, but also peripheries that have been specially designed to minimise *M*.

Although an object-independent definition of *M* is clearly impossible, it is still feasible to define it for a typical object spectrum. Most objects scatter only little light into high diffraction angles, hence a reasonable approach to decrease *M* is to design a periphery that shapes a Gaussian intensity distribution at the centre of the recording plane. Once this target field has been defined, an iterative Fourier transform algorithm such as the Gerchberg-Saxton algorithm can be used to create a diffractive optical element (DOE) which corresponds to the desired periphery in the object plane.
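As an illustration of this design step, the following sketch runs a Gerchberg-Saxton iteration with a single FFT standing in for the propagation into the recording plane; in the actual geometry a Fresnel or angular-spectrum kernel would take its place, and all names are our own:

```python
import numpy as np

def design_phase_doe(target_amp, aperture, n_iter=50, seed=0):
    """Gerchberg-Saxton sketch of a phase-only DOE design.

    target_amp -- desired field amplitude in the recording plane
                  (e.g. a Gaussian spot), same shape as the DOE grid
    aperture   -- boolean mask of the illuminated periphery area
    Returns the DOE phase pattern.
    """
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0.0, 2.0 * np.pi, target_amp.shape)
    for _ in range(n_iter):
        # DOE plane: unit amplitude inside the aperture, phase-only element
        doe = aperture * np.exp(1j * phase)
        # recording plane (FFT as a stand-in for the propagation)
        far = np.fft.fft2(doe)
        # impose the target amplitude, keep the obtained phase
        far = target_amp * np.exp(1j * np.angle(far))
        # back to the DOE plane; keep only the phase of the result
        phase = np.angle(np.fft.ifft2(far))
    return phase
```

The returned phase shows the “scrambled” appearance mentioned above: it is not random, but optimised to condense the light into the target spot.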

The simulation parameters were chosen to resemble our experimental conditions. We assumed a top-hat illumination with light of *λ* = 633 nm wavelength, a distance of 15 cm between object and recording plane, and a circular periphery of 6 mm diameter with a 3 mm hole in its centre. Within this hole, we placed different phase objects (see insets in Fig. 6), each of them showing a peak-to-valley phase of 2*π*. One of the objects was assumed to be also partly absorbing. The objects were deliberately chosen to differ in terms of contrast and spatial frequency content. In contrast to the experimental work where the object is unknown, it is not necessary in the simulations to define a large initial object support *S _{ini}* that is later refined: tight-fitting object supports were used instead right from the start of the phase retrieval algorithm. The approximate boundaries of these support regions are indicated by red lines around the objects in Fig. 6. The numerical grid spacing was 11.4 *μ*m and roughly matches the achievable resolution defined by *λ*/NA, where NA stands for the numerical aperture of the imaging system. We employed the scalar spectrum of plane waves method for simulating the light propagation.

To investigate the role of *M*, we designed a set of 16 different peripheries, each producing a spot of light with Gaussian envelope and random phase at the recording plane centre. The spots have equal total power but different widths *w*, which are defined as the full widths at 1/*e*^{2} intensity level and expressed in angular units as seen from the object plane. Therefore they show a different overlap with the object field and thus different values for *M*. For each periphery, we employed the error-reduction algorithm to retrieve all four objects from single intensity recordings. The algorithm was stopped after three iterations and an error parameter *σ* calculated as the RMS difference between the original object field *E _{orig}* and the retrieved field *E _{retr}*:

$$\sigma =\sqrt{\left\langle {\left|{E}_{\mathit{retr}}(\mathbf{r},{z}_{\mathit{obj}})-{E}_{\mathit{orig}}(\mathbf{r},{z}_{\mathit{obj}})\right|}^{2}\right\rangle }$$

The smallest residual errors were obtained for spot widths around *w* = 0.8°. This approximately applies for all four objects. The diagrams also contain plots of the corresponding values of *M* (red curves). The value of *M* correlates well with the residual errors.

In a second series of simulations, the error convergence behaviour of one of the four phase objects (inset of Fig. 6(b)) was investigated for different peripheries. First we simulated a “plane wave” periphery, which means that the object is illuminated with a plane wave without using any additional peripheral object. This case is closely related to the work described in [17]. We further analysed a diffusing periphery that scatters light within a cone of 1° opening angle. The opening angle shall be denoted as *α _{max}* in order to avoid confusion with the parameter *w* of designed peripheries. The scattering angle profile was chosen to resemble that of the diffuser disc used in the experiments. Finally, we investigated the performance of the designed periphery which produces a Gaussian spot of *w* = 0.8° in the recording plane. The results are summarised in Fig. 6(b). Again, the algorithm performance is reflected by the corresponding values of *M*.

Finally, we investigated the role of the periphery size, or rather that of the *scattering part* of the periphery. At this point it is important to discriminate between the entire periphery, which was defined as the area where the field is known, and the part of it which scatters and therefore leads to a significant decrease of *M*. We shall refer to this scattering area as *A _{scat}*. A series of simulations was performed with different ratios of *A _{scat}* to the object support area, in order to answer the question how small *A _{scat}* can be made without a significant loss in quality. This series of simulations was performed twice: for a designed (*w* = 0.8°) and a randomly scattering (*α _{max}* = 1°) peripheral object. We took our department logo as unknown phase object. The outer diameter of the circular periphery area was gradually decreased, and for each diameter the phase-retrieval procedure was executed for 30 iterations. The obtained residual errors are plotted in the graph of Fig. 7. In the case of the designed periphery they suggest that reasonable results can be obtained if the scattering part of the periphery area exceeds that of the object by a factor of roughly two or more. The result of using the designed periphery at an area ratio of 1.6 is shown on the right. The image shows the reconstructed logo framed by the periphery. The object support and scattering area *A _{scat}* are coloured in red and yellow, respectively. We would like to emphasise that this last result is, strictly speaking, only applicable to the logo as *phase* object. Nevertheless it allows one to visualise the influence of the periphery size on the achievable quality for more general objects.

## 6. Summary and discussion

We have demonstrated a method that significantly improves the performance of phase retrieval algorithms for measuring the full complex-valued structure of an unknown object from a single intensity recording. Considered within the context of inline holography, the method has the ability to suppress the “twin-image”, which represents the disturbing conjugated diffraction order of recorded holograms. Our method is based upon introducing a specially designed phase object into the specimen plane during the recording process. This can be thought of as a specific support constraint. If the structure of this additional (“peripheral”) object is used as a constraint for the phase retrieval algorithm, its convergence speed is significantly improved and results of high precision can be obtained. The key to the achievable accuracy is the precision to which the peripheral object is known. By incorporating an interferometer into the imaging set-up, it can be characterised precisely and *in situ*, which has the advantage that no subsequent image registration steps have to be taken [25]. In the phase retrieval procedure, the precise knowledge of the periphery is “transferred” to precise knowledge of the sample.

We present experimental results that directly demonstrate the proposed advantages. We experimentally compared the performances of two different peripheries: a plane wave and a plastic diffuser, the latter representing a cheap and easily available peripheral object. Results obtained from inline holography and phase-stepping interferometry were also included in the comparison. We find that using the plastic diffuser (albeit not yet optimised/customised) delivers the most accurate results, which can be comparable to the interferometric measurement.

We would like to emphasise that a more attractive way of putting our proposal into practice is to utilise patterned illumination rather than to introduce a real peripheral object into the specimen plane. For instance, one could use a DOE to shape the illumination wave such that it forms the desired periphery in the specimen plane. The use of dynamic devices such as SLMs would even make it possible to calculate and display object-specific peripheries in real time. Exploiting the potential of graphics processing units, the iterative phase retrieval can also be fast, with a single iteration taking only a couple of milliseconds.

## Acknowledgments

This work was supported by the Austrian Science Foundation (FWF) Project No. P19582-N20 and the ERC Advanced Grant 247 024 *catchIT*.

## References and links

**1. **D. Gabor, “A new microscopic principle,” Nature **161**, 777–778 (1948). [CrossRef] [PubMed]

**2. **I. Moon, M. Daneshpanah, A. Anand, and B. Javidi, “Cell identification with computational 3-D holographic microscopy,” Opt. Photonics News **22**, 18–23 (2011). [CrossRef]

**3. **W. Xu, M. H. Jericho, I. A. Meinertzhagen, and H. J. Kreuzer, “Digital in-line holography for biological applications,” Proc. Natl. Acad. Sci. U.S.A. **98**, 11301–11305 (2001). [CrossRef] [PubMed]

**4. **Q. Xu, K. Shi, H. Li, K. Choi, R. Horisaki, D. Brady, D. Psaltis, and Z. Liu, “Inline holographic coherent anti-Stokes Raman microscopy,” Opt. Express **18**, 8213–8219 (2010). [CrossRef] [PubMed]

**5. **P. Spanne, C. Raven, I. Snigireva, and A. Snigirev, “In-line holography and phase-contrast microtomography with high energy x-rays,” Phys. Med. Biol. **44**, 741–749 (1999). [CrossRef] [PubMed]

**6. **M. Kanka, R. Riesenberg, P. Petruck, and C. Graulig, “High resolution (NA=0.8) in lensless in-line holographic microscopy with glass sample carriers,” Opt. Lett. **36**, 3651–3653 (2011). [CrossRef] [PubMed]

**7. **W. L. Bragg and G. L. Rogers, “Elimination of the unwanted image in diffraction microscopy,” Nature **167**, 190–191 (1951). [CrossRef] [PubMed]

**8. **E. N. Leith and J. Upatnieks, “Wavefront reconstruction with continuous-tone objects,” J. Opt. Soc. Am. **53**, 1377–1381 (1963). [CrossRef]

**9. **O. Bryngdahl and A. Lohmann, “Single-sideband holography,” J. Opt. Soc. Am. **58**, 620–624 (1968). [CrossRef]

**10. **L. Onural and P. D. Scott, “Digital decoding of in-line holograms,” Opt. Eng. **26**, 1124–1132 (1987).

**11. **K. A. Nugent, “Twin-image elimination in Gabor holography,” Opt. Commun. **78**, 293–299 (1990). [CrossRef]

**12. **G. Liu and P. D. Scott, “Phase retrieval and twin-image elimination for in-line Fresnel holograms,” J. Opt. Soc. Am. A **4**, 159–165 (1987). [CrossRef]

**13. **G. Koren, F. Polack, and D. Joyeux, “Iterative algorithms for twin-image elimination in in-line holography using finite-support constraints,” J. Opt. Soc. Am. A **10**, 423–433 (1993). [CrossRef]

**14. **F. Zhang, G. Pedrini, W. Osten, and H. J. Tiziani, “Image reconstruction for in-line holography with the Yang-Gu algorithm,” Appl. Opt. **42**, 6452–6457 (2003). [CrossRef] [PubMed]

**15. **T. Latychevskaia and H.-W. Fink, “Solution to the twin image problem in holography,” Phys. Rev. Lett. **98**, 233901 (2007). [CrossRef] [PubMed]

**16. **C. P. McElhinney, B. M. Hennelly, and T. J. Naughton, “Twin-image reduction in inline digital holography using an object segmentation heuristic,” J. Phys.: Conf. Ser. **139**, 012014 (2008). [CrossRef]

**17. **S. M. Raupach, “Cascaded adaptive-mask algorithm for twin-image removal and its application to digital holograms of ice crystals,” Appl. Opt. **48**, 287–301 (2009). [CrossRef] [PubMed]

**18. **R. W. Gerchberg and W. O. Saxton, “A practical algorithm for the determination of the phase from image and diffraction plane pictures,” Optik (Jena) **35**, 237–246 (1972).

**19. **V. Elser, “Phase retrieval by iterated projections,” J. Opt. Soc. Am. A **20**, 40–55 (2003). [CrossRef]

**20. **J. R. Fienup, “Reconstruction of a complex-valued object from the modulus of its Fourier transform using a support constraint,” J. Opt. Soc. Am. A **4**, 118–123 (1987). [CrossRef]

**21. **J. R. Fienup, “Lensless coherent imaging by phase retrieval with an illumination pattern constraint,” Opt. Express **14**, 498–508 (2006). [CrossRef] [PubMed]

**22. **G. Koren, D. Joyeux, and F. Polack, “Twin-image elimination in in-line holography of finite-support complex objects,” Opt. Lett. **16**, 1979–1981 (1991). [CrossRef] [PubMed]

**23. **J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. **21**, 2758–2769 (1982). [CrossRef] [PubMed]

**24. **J. Garcia-Sucerquia, W. Xu, S. K. Jericho, P. Klages, M. H. Jericho, and H. J. Kreuzer, “Digital in-line holographic microscopy,” Appl. Opt. **45**, 836–850 (2006). [CrossRef] [PubMed]

**25. **S. Bernet, W. Harm, A. Jesacher, and M. Ritsch-Marte, “Lensless digital holography with diffuse illumination through a pseudo-random phase mask,” Opt. Express **19**, 25113–25124 (2011). [CrossRef]