## Abstract

Microscopic imaging with a setup consisting of a pseudo-random phase mask and a bare CMOS
camera, without an imaging objective, is demonstrated. The pseudo-random
phase mask acts as a diffuser for an incoming laser beam, scattering a speckle
pattern onto the CMOS chip, which is recorded once as a reference. A sample which
is afterwards inserted *somewhere* in the optical beam path
changes the speckle pattern. A single (non-iterative) image processing step,
comparing the modified speckle pattern with the previously recorded one,
generates a sharp image of the sample. After a first calibration the method
works in real-time and allows quantitative imaging of complex (amplitude and
phase) samples in an extended three-dimensional volume. Since no lenses are
used, the method is free from lens aberrations. Compared to standard inline
holography the diffuse sample illumination improves the axial sectioning
capability by increasing the effective numerical aperture in the illumination
path, and it suppresses the undesired twin images. For demonstration, a high
resolution spatial light modulator (SLM) is programmed to act as the
pseudo-random phase mask. We show experimental results, imaging microscopic
biological samples, such as insects, within an extended volume at a distance of
15 cm with a transverse and longitudinal resolution of about 60
*μ*m and 400 *μ*m,
respectively.

© 2011 Optical Society of America

## 1. Introduction

Inline holography, invented by Gabor around 1948 [1, 2], is a holographic method in which object and reference beams are not spatially separated, i.e. the reference beam consists of the zero-order Fourier component (or carrier wave) of the signal beam. The method has several advantages: it is very stable against vibrations or phase fluctuations, since the two beam components travel along the same path, and the longitudinal coherence length of the illumination light can be very low. The first optical holograms were in fact recorded with light from a mercury arc lamp. Since the development of high resolution digital cameras, digital Gabor holography has also been established as an alternative method in optical microscopy [3].

On the other hand a disadvantage of the method is that a "twin image" appears, which is an inverted (and phase conjugate) copy of the original image that often distorts the quality of the reconstructed image. Furthermore the plane wave illumination produces sharp “shadows” of all objects in the sample volume simultaneously, such that the reconstructed image of a transverse sample plane (“optical sectioning”) is disturbed by sharply reconstructed objects from other planes.

Here we demonstrate a modified concept of digital inline holography which avoids these disadvantages. In our approach the sample is illuminated with diffuse light, which has, however, a known phase distribution. The sample image is calculated from its scattered far-field (or Fresnel-regime) intensity distribution by numerically comparing the scattered speckle pattern with a previously recorded reference pattern.

In our experiment the sample is located between a high resolution pseudo-random phase mask and a CMOS camera chip. Since the phase of each pixel of the phase mask is known, the complex amplitude of the diffracted speckle pattern (without the sample) can be calculated in the plane of the image sensor by numerical Fresnel propagation. If a sample is then placed in the beam path, the diffracted speckle pattern changes accordingly. Since the phase of the undisturbed original speckle pattern is known (by numerical wave propagation of the illumination beam through the pseudo-random phase mask into the camera plane) a sharp image of the sample can be calculated in a single processing step, consisting mainly of a fast two-dimensional Fourier transform. The reconstructed image field can afterwards be propagated numerically to any transverse plane in the optical beam path, thus providing three-dimensional image information by numerical post-processing of the recorded speckle pattern.

One main advantage of using an illumination field scattered from a random phase mask, instead of
a plane wave as in inline holography, is an increase of the longitudinal resolution
or sectioning capability, i.e. the effective numerical aperture (NA), which is given
by the sum of illumination and imaging NAs, is approximately doubled as compared to
plane wave illumination. Since the longitudinal resolution scales with
1/NA^{2}, this effect is particularly significant. We demonstrate this
by simultaneously capturing two millimeter-sized samples with a longitudinal
separation of 5 cm in a single speckle pattern, showing that both
samples can be reconstructed independently. Furthermore the images can be
reconstructed without the appearance of disturbing twin images. Due to the diffuse
reference beam, only the desired first diffraction order is sharply reconstructed,
whereas all other orders (including the minus first order which is responsible for
the twin image) are dispersed in a uniform background [4, 5].

The concept of using a random phase mask in the optical beam path for improving phase retrieval methods has been proposed before, using a specially manufactured phase plate [6] or a spatial light modulator [7]. In these publications the main advantages of using a random phase plate are described from another point of view, mainly as spreading the image information over a larger frequency bandwidth, and therefore the method is called spread-spectrum phase retrieval. Although this method is closely related to our approach by sharing the common advantages of the diffuse image wave, the optical implementation and the numerical image reconstruction methods are different, e.g. in our approach the image is reconstructed from a single recorded speckle pattern by quasi-interferometric comparison with a previously recorded reference speckle pattern.

For numerical reconstruction of a digital inline hologram the full complex amplitude of the
reference wave has to be known. This is no problem in standard inline holography
using for example plane wave illumination, since the reference wave (being the zero
diffraction order of the transmitted light) just corresponds to the illumination
wave, and thus has a uniform amplitude and constant phase. However, in our more
general case of a speckle illumination field, both amplitude and phase of the
reference wave first have to be determined. This could be done for example by a
preliminary interferometric measurement of the complex amplitude of the undisturbed
speckle pattern in the camera plane, before the sample is inserted. This approach
would even work with a standard scattering plate (e.g. ground glass) as the phase mask.
However, for demonstration purposes we use a high resolution spatial light
modulator, which can display a pseudo-random (i.e. known) phase pattern in the range
between 0 and 2*π* with a resolution of 8 *μ*m (pixel size).
From the known phase distribution in the SLM plane, the complex amplitude of the
diffracted field in the camera plane is calculated, which corresponds to the desired
reference wave. In order to map position and size of the numerically calculated
speckle pattern in the camera plane with the actually recorded images, a preliminary
image registration step is required, which, however, can also straightforwardly be
performed by using the programmability of the SLM, as will be shown later. These
calibration steps have to be done only *once* for a given setup, and
from then on imaging can be done in real time, i.e. from each recorded camera frame
the light field in the entire volume between SLM and camera plane can be
reconstructed.

## 2. Experimental approach

Figure 1 summarizes the steps required to record and process a series of images. The upper line (steps 1 and 2) shows the preliminary tasks which have to be done only a single time before (or after) a measurement. The first step is the image registration, where a test hologram displayed at the SLM is sharply reconstructed in the camera plane. Spatial mapping of the experimentally recorded and the theoretically expected images provides a means to map any phase mask displayed at the SLM to a corresponding pixel image recorded by the camera. In step 2 a pseudo random phase mask (with known phase distribution) is displayed at the SLM, and the scattered speckle pattern is recorded by the camera as a reference for further measurements. The phase of each pixel of the recorded image field can be calculated based on the knowledge of the phase mask displayed at the SLM, using the previously obtained image registration information for correctly overlapping the calculated phase with the experimentally recorded speckle image. The next two tasks describe the actual imaging process. In step 3 a sample is placed in the optical path between the SLM (displaying still the same random phase mask as in step 2) and the camera, and an image (or a series of images) of the correspondingly changed speckle pattern is recorded and stored. Finally, numerical data processing (step 4) of the recorded images is carried out by subtracting the intensities of the reference from the image speckle patterns and assigning the calculated phase (step 2) of the reference speckle pattern to the difference. The corresponding complex field corresponds to a hologram of the sample and can be sharply reconstructed in different axial planes by standard numerical back-propagation methods - in our case we use the “spectrum of plane waves method” (with the corresponding propagation operator indicated in the figure).
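The numerical core of step 4 is the propagation operator. A minimal NumPy sketch of the “spectrum of plane waves” (angular spectrum) propagator follows; the grid size, the toy Gaussian test field, and all variable names are our own choices, not taken from the paper:

```python
import numpy as np

def propagate(field, wavelength, pitch, z):
    """Spectrum-of-plane-waves (angular spectrum) propagation of a sampled
    complex field over a distance z; pitch is the pixel size (metres)."""
    ny, nx = field.shape
    FX, FY = np.meshgrid(np.fft.fftfreq(nx, d=pitch),
                         np.fft.fftfreq(ny, d=pitch))
    arg = 1.0 / wavelength**2 - FX**2 - FY**2   # squared axial spatial frequency
    kz = np.sqrt(np.maximum(arg, 0.0))
    kernel = np.where(arg > 0, np.exp(2j * np.pi * z * kz), 0)  # drop evanescent waves
    return np.fft.ifft2(np.fft.fft2(field) * kernel)

# round trip: a Gaussian field propagated 15 cm forward and back is unchanged
x = (np.arange(256) - 128) * 8e-6                  # 8 um pixels, as on the SLM
X, Y = np.meshgrid(x, x)
u0 = np.exp(-(X**2 + Y**2) / (0.5e-3)**2)
u1 = propagate(u0, 633e-9, 8e-6, 0.15)
u2 = propagate(u1, 633e-9, 8e-6, -0.15)
print(np.max(np.abs(u2 - u0)) < 1e-6)              # True
```

Because the kernel is unimodular for propagating components, the operator is invertible by propagating with −*z*, which is exactly the back-propagation used in step 4.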

In more detail, the image reconstruction procedure can be understood as follows: first a pseudo-random mask with a transmission function $T_R(x_S, y_S) = \exp\left(i\Phi_{SLM}(x_S, y_S)\right)$ is displayed in the SLM plane $(x_S, y_S)$, consisting of uniformly distributed phase levels in an interval between 0 and $2\pi$. The correspondingly diffracted speckle image $R(x,y)$ in the camera plane $(x,y)$ is recorded as a reference. Thus $R = \left|F\{\exp(i\Phi_{SLM})\}\right|^{2}$, where the operator $F\{\ldots\}$ denotes Fresnel (or Fourier) propagation of the wave field from the SLM to the camera plane.
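As an illustration, the reference speckle pattern and its phase can be computed from the mask with a single-FFT Fresnel transform playing the role of $F$. Parameter values follow the experiment; the implementation details are our own sketch, not the authors' code:

```python
import numpy as np

wavelength, pitch, z = 633e-9, 8e-6, 0.15
N = 512
rng = np.random.default_rng(1)
phi_slm = rng.uniform(0, 2 * np.pi, (N, N))   # pseudo-random mask, 0..2*pi

# single-FFT Fresnel transform: quadratic chirp in the SLM plane, then an FFT
x = (np.arange(N) - N // 2) * pitch
X, Y = np.meshgrid(x, x)
chirp = np.exp(1j * np.pi * (X**2 + Y**2) / (wavelength * z))
ref = np.fft.fftshift(np.fft.fft2(np.exp(1j * phi_slm) * chirp, norm="ortho"))

R = np.abs(ref)**2      # reference speckle intensity, as recorded by the camera
phi_R = np.angle(ref)   # its phase, known only because phi_slm is known
print(abs(R.mean() - 1.0) < 1e-9)   # True: unitary FFT conserves total energy
```

The key point is that `phi_R` is available purely numerically, which is what replaces the interferometric reference measurement of standard off-axis holography.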

In the next step we assume that a sample object is inserted in the SLM plane (calculation for other positions is straightforward by propagating the field with the respective propagators). Furthermore it is assumed that the complex transmission function of the sample $O(x_S, y_S) = 1 + \Delta O(x_S, y_S)$ (with $|\Delta O(x_S, y_S)| \ll 1$) is close to unity, i.e. the sample is only a small disturbance for the transmitted speckle field. Then the complete wave field behind the SLM and the object becomes $O\,T_R = T_R + \Delta O\,T_R = \exp(i\Phi_{SLM}) + \Delta O\,\exp(i\Phi_{SLM})$. In this case the image intensity $S(x,y)$ in the camera plane becomes $S = F\{T_R + \Delta O\,T_R\}\,F^{*}\{T_R + \Delta O\,T_R\}$, where the “*” symbol means the complex conjugate.

Using the linearity of the Fourier transform this can be expanded to

$$S = F\{T_R\}F^{*}\{T_R\} + F\{T_R\}F^{*}\{\Delta O\,T_R\} + F\{\Delta O\,T_R\}F^{*}\{T_R\} + F\{\Delta O\,T_R\}F^{*}\{\Delta O\,T_R\}. \tag{1}$$

The first term in the sum corresponds to the intensity $R$ of the undisturbed speckle image, whereas the last term can be neglected, considering that $|\Delta O| \ll 1$. Under this assumption the difference between the two speckle images with and without inserted sample object becomes

$$S - R \approx F\{T_R\}F^{*}\{\Delta O\,T_R\} + F\{\Delta O\,T_R\}F^{*}\{T_R\}. \tag{2}$$

Therefore, in order to obtain the desired complex transmission function of the object, we have to calculate:

$$F^{-1}\left\{\frac{S - R}{F^{*}\{T_R\}}\right\} = \Delta O\,T_R + F^{-1}\left\{\frac{F\{T_R\}}{F^{*}\{T_R\}}\,F^{*}\{\Delta O\,T_R\}\right\}. \tag{3}$$

Here $F^{-1}$ denotes the inverse Fresnel transform (from the camera plane to the SLM plane). Note that the second term on the right side contains the ratio $F\{T_R\}/F^{*}\{T_R\}$ as a part of the argument of the inverse Fresnel transform, corresponding to a randomly distributed speckle field. Therefore the inverse Fresnel transform of this term (even after multiplication with the additional factor $F^{*}\{\Delta O\,T_R\}$) also results in a random speckle field, which is distributed uniformly in the SLM (= object) plane, and can be regarded as “speckle noise” with a total (integrated) intensity which corresponds to the intensity of the reconstructed object. However, since the object is localized and the speckle field is homogeneously distributed across the whole object plane, the object can still be reconstructed with a high signal-to-noise ratio against a diluted background. Thus one finally obtains

$$\Delta O \approx T_R^{-1}\,F^{-1}\left\{\frac{S - R}{F^{*}\{T_R\}}\right\} = \exp(-i\Phi_{SLM})\,F^{-1}\left\{\frac{S - R}{\sqrt{R}}\,\exp(i\Phi_R)\right\}. \tag{4}$$

The result of this calculation is a complex “dark field” image of the sample
(i.e. only $\Delta O$ but not $O$ is obtained),
where both amplitude and complex phase of the object are reconstructed. This is
possible since the term ${F}^{*}\left\{{T}_{R}\right\} = \sqrt{R}\,\exp\left(-i\Phi_R\right)$ is known, i.e. $R$ is measured as
the intensity distribution of the reference image in the camera plane, and the
corresponding phase $\Phi_R$ is numerically calculated
from the known transmission function $T_R$ of the pseudo-random phase mask. In the experiment it is advantageous to reduce artifacts due to the division by small absolute values of $F^{*}\{T_R\}$ by approximating ${F}^{*}\left\{\exp\left(i\Phi_{SLM}\right)\right\} \approx \sqrt{(R+S)/2}\,\exp\left(-i\Phi_{R}\right)$. In this case one gets approximately:

$$\Delta O \approx \exp(-i\Phi_{SLM})\,F^{-1}\left\{\frac{S - R}{\sqrt{(R+S)/2}}\,\exp(i\Phi_R)\right\}. \tag{5}$$
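The derivation can be checked in a small simulation. Here the operator $F$ is taken as a plain unitary Fourier transform (the algebra of Eqs. (1)–(5) is the same for a Fresnel propagator), and the weak object is a synthetic patch; grid size, names and the object are our own choices:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 128

# unitary FFT plays the role of the propagation operator F of the derivation
F  = lambda u: np.fft.fft2(u,  norm="ortho")
Fi = lambda u: np.fft.ifft2(u, norm="ortho")

# step 2: pseudo-random phase mask and reference speckle pattern
phi_slm = rng.uniform(0, 2 * np.pi, (N, N))
T_R = np.exp(1j * phi_slm)
ref = F(T_R)
R, phi_R = np.abs(ref)**2, np.angle(ref)

# step 3: weak object (|dO| << 1) in the SLM plane changes the speckle
dO = np.zeros((N, N), complex)
dO[60:68, 60:68] = 0.1                       # small amplitude perturbation
S = np.abs(F(T_R * (1 + dO)))**2

# step 4: single-step reconstruction according to Eq. (5)
holo = (S - R) / np.sqrt((R + S) / 2) * np.exp(1j * phi_R)
dO_rec = np.exp(-1j * phi_slm) * Fi(holo)

mask = np.abs(dO) > 0
contrast = np.abs(dO_rec[mask]).mean() / np.abs(dO_rec[~mask]).mean()
print(contrast > 5)      # object stands out well above the speckle background
```

The reconstructed patch sits on the diluted speckle background predicted by the second term of Eq. (3); its contrast grows as the object support shrinks relative to the grid.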

We now describe the individual steps taken in a demonstration experiment. The optical arrangement
is shown in Fig. 2: a continuous wave Helium
Neon laser (with 10 mW power at a wavelength of 633 nm, and with a bandwidth on the
order of 1 GHz, lasing in TEM_{00} mode) is used for illumination. The beam
is expanded by a set of lenses (not shown) and illuminates the surface of a
reflective SLM (Holoeye HEO 1080P) with a slightly divergent beam through a
non-polarizing beamsplitter cube. Directly behind the laser the linear beam
polarization direction is optimized for SLM incidence by a half-wave plate (not
shown), such that the SLM acts as an almost pure phase modulator, affecting the
diffracted polarization only negligibly. The SLM has a resolution of 1920 × 1080
pixels, each with a square shape and an edge length of 8 *μ*m. The SLM is
connected to the digital graphics card output of a computer and displays a copy of
the actual computer screen image. The gray values of each pixel at the computer
monitor are converted into refractive index variations of the liquid crystals at the
corresponding SLM pixels, such that 256 (8-bit) phase levels within a range between
0 and 2*π* can be displayed by each SLM pixel. Only a
square region at the center of the SLM surface consisting of 1024 ×
1024 pixels (corresponding to an area of approximately 8 × 8 mm^{2}) is used
in the experiment; the remaining area is shielded by a square aperture of black
cardboard. The light diffracted off the SLM surface passes through the beamsplitter
cube and is reflected by a mirror to the chip of a CMOS camera (Canon EOS 1000D) at
a distance of approximately 15 cm from the SLM. The camera chip has a size of 22.2
× 14.8 mm^{2} and a resolution of 3888 × 2592 "colored"
pixels. The distance between the SLM and the camera is chosen such that all first
order diffracted light from the SLM reaches the CMOS chip surface. The CMOS camera
is connected via a USB cable to a computer for remote control. For adjustment, the
camera is operated in video mode, showing a real-time image of the light intensity
at the CMOS chip on the computer monitor. However, for image recording the camera
operates in full resolution mode, recording the speckle images in uncompressed
raw-format. Due to the red laser illumination, only the red channel of the RGB image
data is used for further data processing.
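Extracting the red channel from the raw Bayer mosaic might look as follows. This is a sketch only: we assume an RGGB layout with red photosites at the even rows and columns, which may differ from the actual sensor pattern of the EOS 1000D:

```python
import numpy as np

def red_channel(bayer):
    """Return the red sub-grid of a Bayer raw frame.
    Assumption: RGGB mosaic, red sites at even rows/columns."""
    return bayer[0::2, 0::2].astype(float)

# toy raw frame: mark the assumed red photosites
frame = np.zeros((8, 8))
frame[0::2, 0::2] = 5.0
print(red_channel(frame).shape)   # (4, 4)
```

Taking only the red sub-grid halves the pixel count per axis but avoids demosaicing artifacts, which matters when individual speckle grains must be resolved.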

For calibration (step 1 in Fig. 1) a numerically
calculated Fresnel hologram of a test pattern is displayed at the SLM, and the
correspondingly diffracted image is recorded by the camera. An example of such a
recorded test pattern is shown in Fig. 3.
First the on-axis phase hologram of the test pattern containing some number-labeled
cross-lines used for image registration is calculated as a far-field (Fourier)
hologram using an iterative Fourier transform algorithm [8]. Then it is transformed into a Fresnel
hologram which sharply reconstructs at a distance of 15 cm behind the SLM screen by
multiplying the Fourier hologram pixel by pixel with a parabolic lens term, namely
with exp(*iπr*^{2}/(*λz*)),
where *r* is the radius measured from the center of the SLM,
*λ* is the light wavelength (633 nm) and
*z* is the desired distance (15 cm) where the hologram should be
sharply reconstructed. Due to the offset divergence of the incoming laser beam, the
actual reconstruction distance is slightly larger than the programmed Fresnel
distance of 15 cm, and the camera is positioned in the experimentally determined
sharp image plane. The advantage of the Fresnel setup is that the zeroth diffraction
order of the hologram, i.e. the merely reflected component of the light which
amounts - due to the limited diffraction efficiency of the SLM - still to about
5% of the total image intensity, does not focus to a point in the image
plane (as in a Fourier hologram), but is instead distributed over the CMOS chip
surface [9]. Due to this
“intensity dilution” its intensity is much weaker than that of the
sharply reconstructed image structures and thus it can be neglected during
post-processing of the recorded images.
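The conversion from a Fourier to a Fresnel hologram described above amounts to a pixel-wise phase addition. A sketch with a random stand-in for the Fourier hologram (the real one would come from the iterative Fourier transform algorithm of [8]):

```python
import numpy as np

wavelength, z, pitch, N = 633e-9, 0.15, 8e-6, 1024
x = (np.arange(N) - N // 2) * pitch
X, Y = np.meshgrid(x, x)

# stand-in for the on-axis Fourier hologram phase computed by the
# iterative Fourier transform algorithm (not reproduced here)
fourier_phase = np.random.default_rng(2).uniform(0, 2 * np.pi, (N, N))

# parabolic lens term exp(i*pi*r^2/(lambda*z)) turns it into a Fresnel
# hologram that reconstructs sharply 15 cm behind the SLM
lens_phase = np.pi * (X**2 + Y**2) / (wavelength * z)
fresnel_phase = np.mod(fourier_phase + lens_phase, 2 * np.pi)

# quantise to the SLM's 256 (8-bit) phase levels
levels = (np.round(fresnel_phase / (2 * np.pi) * 256) % 256).astype(np.uint8)
```

Wrapping modulo 2*π* and quantising to 8 bits reproduces what the SLM driver does with the displayed gray values.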

The main purpose of the test pattern is to map and merge the positions of the experimentally
recorded image with the numerically reconstructed image in the computer. This is
done by propagating the known phase pattern displayed at the SLM with a Fresnel
propagator into the camera plane at a distance of 15 cm, and comparing the position
of this numerically reconstructed image with the experimentally recorded image. In
our data processing software MATLAB an interactive algorithm performs the required
procedure of so-called “image registration” by selecting a set of
equivalent test points in the theoretically and experimentally reconstructed images.
From this set of corresponding points a transformation matrix is calculated and
stored. This transformation matrix is then applied to all further experimentally
recorded images and has the effect of adapting their size, orientation and possible
geometric distortion, such that afterwards the position of each experimentally
recorded image pixel exactly corresponds to that of its numerical reconstruction.
Although this procedure maps only the theoretically expected with the experimentally
recorded *intensity* images, one can calculate also the phase of each
image pixel in the camera plane from the numerical reconstruction of the SLM
pattern.
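The registration step boils down to fitting a linear (affine) transformation to the selected point pairs. A least-squares sketch (our own minimal version; the MATLAB routine used in the experiment may additionally model geometric distortion):

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine map dst ~ src @ A + t from matched point pairs."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    M = np.hstack([src, np.ones((len(src), 1))])    # rows [x, y, 1]
    params, *_ = np.linalg.lstsq(M, dst, rcond=None)
    return params                                    # 3x2 array: A on top, t below

# synthetic check: recover a known scale/rotation/shift exactly
src = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [2, 1]], float)
A = np.array([[1.1, -0.2], [0.2, 0.9]])
t = np.array([3.0, -1.0])
dst = src @ A + t
P = fit_affine(src, dst)
print(np.allclose(src @ P[:2] + P[2], dst))          # True
```

With more than three point pairs the fit is over-determined, which averages out small click errors in the interactive point selection.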

The next step is to display a pseudo-random phase mask at the SLM, with each pixel having a randomly chosen phase in the interval between 0 and 2*π* (step 2 in Fig. 1). In this case the SLM phase mask acts as an almost ideal scatterer which produces a two-dimensional diffuse speckle pattern at the camera. This speckle pattern is then stored as a reference for all further measurements. Note that due to the knowledge of the phase mask displayed at the SLM, also the phase of each pixel in the camera plane can be calculated.

After these preliminary steps, which have to be done only a single time before the measurements,
a small sample object is inserted *somewhere* into the beam path,
while the SLM is still displaying the same pseudo-random phase pattern as before
(step 3 in Fig. 1). The presence of the
sample changes the speckle pattern recorded by the camera.

In a first experiment we placed a “fly” with a size of approximately 2 ×
2 mm^{2} directly at the surface of the SLM. The insect acts as a mixed
amplitude and phase sample, since it contains both transparent (wings) and
absorptive (body) parts.

The numerical reconstruction of the sample (indicated in Fig. 1, step 4) is performed according to Eq. (5). Figure 4(a) and 4(b) show the recorded speckle pattern without and with the sample in the optical path, respectively. The absolute value of the difference between the speckle patterns with (*S*) and without (*R*) the inserted sample (normalized by $\sqrt{S+R}$) is shown in Fig. 4(c). Note that the difference between the images of (a) and (b) represents an array that contains positive and negative values. This array is then multiplied pixel by pixel with $\exp(i\Phi_R)$ (which is calculated by numerically propagating the SLM phase pattern into the camera plane - see step 4 in Fig. 1). Afterwards the resulting complex number array is back-propagated into the SLM plane by inverting the Fresnel operation used before for the calculation of the camera image from the SLM phase mask, and divided by the phase term $\exp(i\Phi_{SLM})$, thus removing the offset phase of the illumination light. The squared absolute value of this back-propagated image corresponds to a sharp intensity image of the sample in the SLM plane, in this case an image of the fly placed on top of the SLM (shown in (d)). The corresponding phase of the calculated complex amplitude (in (e)) corresponds to the phase of the sample object, i.e. it is a quantitative measure for the optical thickness of the object.

For demonstration that imaging is possible in an extended three-dimensional volume, the “fly” was placed on top of the SLM surface, and a second sample, namely an ant, on top of the beamsplitter cube, such that both specimens were in the optical beam path between SLM and camera, with a relative distance of approximately 5 cm. The recorded speckle pattern was processed as described before, and the resulting complex wave field numerically propagated to different axial positions between the SLM and the camera plane.

Figure 5(a) shows the result of the reconstruction in the SLM plane, corresponding to a sharp image of the “fly” which is located there. After numerical refocussing by a distance of 2.5 cm, corresponding approximately to the middle axial position between the two insects, the whole image is blurred (b). Finally, after a further propagation of 2.5 cm the image of the “fly” has completely vanished and a sharp image of the ant placed on top of the beam splitter cube is reconstructed (c). For comparison with standard inline holography the whole imaging sequence was repeated in (d)–(f) for plane wave illumination (by displaying just a plane phase front at the SLM). Although non-overlapping image parts of fly and ant also appear sharply in the corresponding focal planes, the overlapping parts of the images disturb each other, such that the two imaged objects cannot be identified. The fact that the diffuse illumination in (a)–(c) makes it possible to discriminate between the two axially separated objects is due to the corresponding increase in the effective numerical aperture of the illumination beam.

An mpg movie (Media 1, 3.6 MB) which gives a better impression of the effects of numerical refocussing is enclosed. The movie consists of a series of two-dimensional wave field reconstructions in continuously changing axial planes, starting near the SLM plane and moving to the surface plane of the beamsplitter cube. Note that the whole movie is generated by numerically processing only a single recorded speckle pattern. This suggests applying the method to the three-dimensional imaging of dynamic processes, since the changing speckle patterns can be recorded at high rates that are practically only limited by the maximal recording speed of the image sensor. The three-dimensional information of the field in the whole optical beam path can afterwards be extracted by numerical post-processing.

In order to estimate the transverse and axial resolution of our imaging setup we performed another series of experiments with sample test objects placed on top of the beamsplitter cube. Inserting a metal-coated transmissive USAF resolution target at this position results in the reconstructed image shown in Fig. 6(a). The image quality is in this case strongly reduced by speckle noise, which is due to the considerable thickness (2 mm) of the glass target, which changes the phase of the reference speckle pattern in the camera plane with respect to its numerically calculated distribution (which is the basis of our image reconstruction). Additionally the resolution target does not satisfy the condition, used in the derivation of Eq. (1), that the sample is only a small disturbance to the illumination wave, which also reduces the image quality. Nevertheless a resolution of at least 63 *μ*m is obtained, and it may be expected that this would improve considerably for a better suited sample object.

In order to estimate the axial resolution we placed a “crosshair” sample on top of the beam-splitter cube, consisting of two clamped, crossed human hairs with a relative distance of 400 *μ*m. Figure 6(b) and 6(c) show two reconstructed images of the sample, which are numerically sharply focussed in the planes of the horizontal (b) and vertical (c) hairs. A comparison of the two figures shows that the focal planes can be clearly distinguished, i.e. in (b) the horizontal hair is clearly sharper than the vertical one, whereas in (c) the situation is reversed. At a separation of 2 mm, the peak intensity of the blurred hair is about half of the peak intensity of the one in focus. For thinner objects the difference would be even more distinct, which is why we estimate the achieved axial resolution to be better than 2 mm.

Theoretically it is expected that the transverse image resolution *d* is limited by the pixel size of the SLM (*p* = 8 *μ*m). The theoretical limit *d* ≈ *λ*/2*NA* (the factor 2 arises because the imaging and illumination NAs are approximately equal, doubling the total NA) is determined by the numerical aperture *NA* of the imaging arrangement, which is given by the distance between camera and SLM (*z*), and the size of the camera chip (*L*), i.e. *NA* ≈ *L*/2*z*. In the initial alignment a test hologram was programmed such that it used the full resolution of the SLM to diffract a test pattern to the camera, which just filled the camera chip. Since the maximal diffraction angle *α* of the SLM is limited by its minimal grating constant, corresponding to two pixel diameters *p*, namely sin(*α*) = *λ*/2*p*, the maximal image size in the camera plane, which corresponds also to the size of the camera chip *L*, is given by *L* = 2*z*sin(*α*) = *zλ*/*p*. Comparing this with the resolution limit *d*, we find that *d* ≈ *p*, as expected. Note that this does not change considerably when shifting the sample to another axial position, since a shift which, e.g., increases the imaging NA, simultaneously decreases the illumination NA, such that the total NA, given by the sum of the two, remains approximately equal (if the sample is in the middle between the SLM and the camera).

The axial resolution is expected to be on the order of the Rayleigh range of the setup, given by 4*λ*/*NA*^{2} ≈ 0.6 mm, which is close to our experimentally estimated value of < 2 mm.
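Plugging the nominal setup values into these formulas gives a quick consistency check (our own arithmetic; note the axial estimate depends on whether the single-path or the doubled NA is inserted, bracketing the quoted value):

```python
wavelength = 633e-9    # HeNe wavelength (m)
p = 8e-6               # SLM pixel pitch (m)
z = 0.15               # SLM-to-camera distance (m)

sin_alpha = wavelength / (2 * p)      # maximal first-order diffraction angle
L = z * wavelength / p                # image size at the camera, L = z*lambda/p
NA = L / (2 * z)                      # per-path numerical aperture, NA = L/2z

d = wavelength / (2 * NA)             # transverse limit with doubled total NA
axial_single = 4 * wavelength / NA**2          # Rayleigh range, single-path NA
axial_double = 4 * wavelength / (2 * NA)**2    # Rayleigh range, doubled NA

print(abs(d - p) < 1e-9)                       # True: d ~ p, as derived above
print(round(axial_single * 1e3, 2), round(axial_double * 1e3, 2))  # 1.62 0.4 (mm)
```

So the transverse limit equals the SLM pitch exactly, and the Rayleigh-range estimate falls between about 0.4 mm and 1.6 mm.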

The sharply reconstructed images are surrounded by a speckled background due to the simplified numerical reconstruction process, which corresponds to a numerical hologram reconstruction, and thus produces in principle an undesired phase conjugate diffraction order (the twin-image) with the same intensity as the reconstructed image. However, the image intensity which is in standard inline holography localised in the twin-image is in our case distributed diffusely in the whole image plane as a diluted background speckle pattern. In principle, this can be avoided by more extensive numerical processing with so-called phase retrieval methods [10–12], which can iteratively calculate a complex wave field from its known (or measured) amplitude and/or phase distributions in two different transverse planes - the so called boundary conditions. This seems to be very well suited to our situation, since the known phase distribution of the pseudo-random phase mask corresponds to a boundary condition with a very high and detailed information content, which should allow an accurate and fast convergence of the phase retrieval algorithm. Thus, our straightforward holographic reconstruction may be used as a first iteration step of a more extensive phase retrieval algorithm, which might enable background-free quantitative reconstruction of the samples. Nevertheless our more simple quasi-holographic reconstruction method has the advantage that its numerical effort consists only in a single two-dimensional Fourier transform which can be calculated in real time (at video rate) - as compared to the elaborate phase retrieval methods which often use hundreds of iteration steps.

## 3. Conclusion

The demonstrated method of optical imaging without using imaging optics can be advantageous due to its simplicity. In our approach it uses a low-cost consumer camera to produce images of complex samples in a widely extended volume, without geometric image distortions which usually derive from lens errors. After a first calibration, consisting of image registration with a test pattern and recording of a reference speckle pattern without the sample, the method works at rates that are practically only limited by the acquisition speed of the image sensor, and each recorded frame contains the full information of the wave field in the volume between the diffusing mask and the camera, which can be recovered afterwards by numerical image processing. The currently used SLM makes the experiment still expensive, but in principle it can be replaced by a standard diffuser with a known phase profile, which can either be manufactured with the methods of diffractive optics, or can even be a standard (e.g. ground glass) diffuser which is once measured interferometrically before it is employed for imaging.

Compared to on-axis holography with plane wave illumination, the diffuse illumination has the advantage of avoiding the twin-image problem, i.e. it produces no sharply imaged diffraction orders besides the desired image field. Furthermore, it approximately doubles the effective numerical aperture, which results in an increased axial resolution that allows independent imaging in different axial planes.

The method can also be adapted for the real-time detection of dynamic changes happening between the recording of two adjacent camera frames - this might be achieved by using each preceding camera frame as a reference for the next (instead of recording a first reference image without inserted sample). In this case the described processing will just image the differences between two adjacent images, for example highlight the moving boundaries of a dynamic object.
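A minimal sketch of this frame-differencing variant (our own formulation: it simply substitutes the preceding frame for the reference *R* in the Eq. (5)-style processing; the phase map would still come from the one-time calibration):

```python
import numpy as np

def difference_hologram(prev_frame, curr_frame, phi_R):
    """Eq.(5)-style hologram with the preceding frame as reference:
    reconstructing it images only what changed between the two frames."""
    diff = curr_frame - prev_frame
    norm = np.sqrt((prev_frame + curr_frame) / 2 + 1e-12)  # avoid /0 in dark pixels
    return diff / norm * np.exp(1j * phi_R)

# unchanged scene -> zero hologram -> empty difference image
f = np.random.default_rng(3).uniform(0.1, 1.0, (64, 64))
phi = np.random.default_rng(4).uniform(0, 2 * np.pi, (64, 64))
print(np.allclose(difference_hologram(f, f, phi), 0))   # True
```

Back-propagating the returned array (as in step 4) would then highlight only the moving boundaries of a dynamic object.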

## Acknowledgments

This work was supported by the Austrian Science Foundation (FWF) Project No. P19582-N20.

## References and links

**1. **D. Gabor, “A new microscopic principle,” Nature **161**, 777–778 (1948). [CrossRef] [PubMed]

**2. **D. Gabor, “Microscopy by reconstructed wave-fronts,” Proc. R. Soc. London Ser. A **197**, 454–487 (1949). [CrossRef]

**3. **I. Moon, M. Daneshpanah, A. Anand, and B. Javidi, “Cell identification with computational 3-D holographic microscopy,” Opt. Photon. News **22**, 18–23 (2011). [CrossRef]

**4. **T. Nomura and M. Imbe, “Single-exposure phase-shifting digital holography using a random-phase reference wave,” Opt. Lett. **35**, 2281–2283 (2010). [CrossRef] [PubMed]

**5. **C. Maurer, A. Schwaighofer, A. Jesacher, S. Bernet, and M. Ritsch-Marte, “Suppression of undesired diffraction orders of binary phase holograms,” Appl. Opt. **47**, 3994–3998 (2008). [CrossRef] [PubMed]

**6. **F. Zhang, G. Pedrini, and W. Osten, “Phase retrieval of arbitrary complex-valued fields through aperture-plane modulation,” Phys. Rev. A **75**, 043805 (2007). [CrossRef]

**7. **C. Kohler, F. Zhang, and W. Osten, “Characterization of a spatial light modulator and its application in phase retrieval,” Appl. Opt. **48**, 4003–4008 (2009). [CrossRef] [PubMed]

**8. **An explanation of iterative Fourier transform algorithms can be found for example in: B. C. Kress and P. Meyrueis (Eds.) “*Digital Diffractive Optics*,” 1st ed. (John Wiley & Sons, 2000) ISBN-13: 978-0-471-98447-4.

**9. **A. Jesacher, S. Fürhapter, S. Bernet, and M. Ritsch-Marte, “Diffractive optical tweezers in the Fresnel regime,” Opt. Express **12**, 2243–2250 (2004). [CrossRef] [PubMed]

**10. **J. R. Fienup, “Reconstruction of an object from the modulus of its Fourier transform,” Opt. Lett. **3**, 27–29 (1978). [CrossRef] [PubMed]

**11. **J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. **21**, 2758–2769 (1982). [CrossRef] [PubMed]

**12. **J. N. Cederquist, J. R. Fienup, J. C. Marron, and R. G. Paxman, “Phase retrieval from experimental far-field speckle data,” Opt. Lett. **13**, 619–621 (1988). [CrossRef] [PubMed]