## Abstract

Phase-imaging techniques extract the optical path length information of a scene, whereas wavefront sensors provide the shape of an optical wavefront. Since these two applications have different technical requirements, they have developed their own specific technologies. Here we show how to perform phase imaging by combining wavefront sampling, implemented with a reconfigurable spatial light modulator, with a beam position detector. The result is a time-multiplexed detection scheme whose acquisition time can be shortened considerably by compressive sensing. This robust, referenceless method does not require the phase-unwrapping algorithms demanded by conventional interferometry, and its lenslet-free nature removes trade-offs usually found in Shack–Hartmann sensors.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

## 1. INTRODUCTION

Even though the physical nature of light has been understood for more than a century, there are still no detectors capable of directly imaging both the amplitude and phase of a wavefront [Fig. 1(a)]. Information about these two quantities is of central interest in biomedical imaging [1,2], aberration measurement and correction in visual optics [3,4], and three-dimensional imaging [5], among other applications. The limitation ultimately arises from the extremely fast oscillations of optical fields, which current electronics are unable to resolve. As detectors only capture light irradiance, several approaches have been proposed over time to tackle the phase problem. Historically, Gabor suggested in 1949 the first quantitative technique, which used interferometric information to recover the complex optical field [6], establishing the basis of modern holography and paving the way for applications such as quantitative phase microscopy [7]. In parallel, phase information plays a fundamental role in adaptive optics, a technique that aims to measure and correct optical aberrations in real time [8]. Although adaptive optics was initially conceived to circumvent atmospheric turbulence in astronomy, its simple operating principle has found applications in other areas, such as visual optics [3,4], microscopy [9,10], and biomedical imaging [11,12].

Two main groups of techniques have emerged to tackle the problem of recovering the complex amplitude of the light field. The first includes interferometric approaches, which measure the interference between light coming from the object and a reference beam [Fig. 1(b)]. Although they are extremely powerful for precise phase measurements, their high sensitivity to environmental perturbations (such as mechanical vibrations and temperature changes) and the need for a reference beam (not always attainable) hinder the implementation of portable and compact interferometric imaging systems in many applications. As an alternative, a second group of techniques has emerged, whose objective is to recover the same information without the need for a reference beam. These non-interferometric approaches rely on several assumptions about the object beam and use mathematical algorithms to infer the wavefront information [13]. Examples of such techniques are Fourier ptychography [14], coherent diffractive imaging [15,16], phase imaging based on the transport-of-intensity equation [17], and phase imaging with randomized illumination [18]. By eliminating the need for a reference beam, simpler and more robust devices can be designed. Furthermore, computational approaches also extend the physical capabilities of imaging systems, providing increased field of view [19], optical sectioning [20], or superresolution [21,22]. However, the recovery algorithms used in these techniques usually entail long post-processing times or multiple data acquisitions per image.

Among the non-interferometric techniques, Shack–Hartmann (SH) wavefront sensors [23] are currently the most frequently employed for measuring optical aberrations. SH wavefront sensors combine a lenslet array with a pixelated detector, such as a charge-coupled device (CCD) or a CMOS camera. By placing the detector at the focal plane of the lenslet array, the positions of the foci generated by each lenslet can be linked to the phase of the wavefront in each region [Fig. 1(c)]. In other words, each lenslet samples the phase information of a region of the wavefront. This method provides a very compact system that is easy to use and align and that requires no complex post-processing stages. For these reasons, SH sensors are used in many scientific fields [4,24–27]. However, the number of lenslets, their focal length, and their diameter limit the spatial resolution, dynamic range, and sensitivity of SH wavefront sensing. Although multiple solutions have been proposed to manage the trade-offs between these quantities [28–36], manufacturing processes still limit the size and curvature of available lenslets, constraining the attainable spatial resolution of commercial SH sensors. Quadriwave lateral shearing interferometry [37], also known as the modified Hartmann mask technique, makes it possible to increase the SH spatial resolution by a factor of four [38]. Alternatively, phase imaging can be performed with pyramid wavefront sensors, which employ a four-sided refractive pyramid to split the Fourier plane into four parts, each generating a phase-gradient image on a pixelated detector [39]. Further developments replace the pyramid with a quadri-foil lens [40] or a liquid crystal display [41]. Despite the benefits of this approach in terms of spatial resolution, the illumination numerical aperture determines the detectable dynamic range, so in practice only relatively smooth phase gradients can be precisely recorded [42].

Here, we perform phase imaging using an alternative non-interferometric approach to measure the complex amplitude of a wavefront. We overcome the inherent limitations in spatial resolution, optical efficiency, and dynamic range found in SH wavefront sensing. We sample the wavefront with a set of binary amplitude masks generated by a high-speed spatial light modulator (SLM). A single focusing lens forms a time-dependent light distribution on its focal plane, where an irradiance detector is placed. By measuring the changes in the total irradiance and the centroid position of that distribution, both the amplitude and phase of the wavefront can be recovered [Fig. 1(d)]. One advantage of this time-multiplexed detection scheme is that it can be implemented using compressive sensing (CS), allowing us to go beyond the Shannon–Nyquist condition, with the subsequent reduction in acquisition time [43,44]. Unlike previous phase-retrieval methods based on SH wavefront sensors or wavefront modulation [45–49], our approach is lenslet-free and relies neither on iterative algorithms nor on interferometric measurements. Moreover, it does not require phase-unwrapping algorithms, such as those employed in some interferometric techniques because of their limited phase-unambiguity range [50]. As a proof of concept, we perform aberration sensing as well as phase imaging of a low-contrast sample with variable optical thickness. The corresponding results are compared, respectively, with Shack–Hartmann wavefront sensing and phase-shifting interferometry.

## 2. OPERATION PRINCIPLE

The basic idea of our technique is to use the relationship between the amplitude and phase of a wavefront and the statistical moments of its Fourier irradiance distribution. This idea is also at the heart of current SH sensors, as they sample a wavefront by means of an array of lenslets and measure the centroid position (the first-order statistical moment) of the Fourier distribution created by each lenslet, allowing one to derive the local slope of the wavefront. By placing a pixelated detector (typically a CCD) at the focal plane of the lenslet array, SH sensors fully map the local slopes of the wavefront. After that measurement, numerical integration provides the phase of the wavefront at each lenslet position [51] [see Fig. 2(a)].

Several trade-offs arise from the SH configuration. First, the spatial resolution of the measurement is determined by the total number of lenslets and their size. To increase the spatial resolution, arrays with a higher density of lenslets can be built. However, reducing the size of each lenslet also decreases the amount of light reaching each region of the detector. If the signal-to-noise ratio of the measurement is low, the accuracy of the centroid detection is greatly reduced, which hinders a reliable wavefront reconstruction. Even if the detector can work in low-light-level scenarios, building large arrays of lenslets with diameters below 100 μm and accurate curvatures is technologically challenging [52]. Moreover, centroid displacement and wavefront slope are related via the lenslet focal length, so, for a given slope, the longer the focal length, the larger the measured centroid displacement. This brings about a second trade-off, since a strongly aberrated wavefront may produce centroid displacements larger than the detection area allocated to each lenslet on the sensor. The result is crosstalk between nearby spots that produces errors in the reconstructed phase, limiting the attainable dynamic range of the sensor. This problem can be circumvented (without sacrificing sensitivity) by a number of techniques that track the real spot location or infer it computationally [29]. Instead of trying to expand the dynamic range of a sensor by increasing its hardware and/or software complexity, an alternative approach is to implement a reconfigurable (adaptive) SH sensor that includes an array of diffractive lenslets programmed onto an SLM [28,33]. The lenslet characteristics can then be chosen to optimally match the requirements of a specific application, although the trade-off between sensitivity and dynamic range still remains.
Another aspect of SH sensors that is usually not considered is that the total amount of light arriving at each detector region provides a measurement of the light power at the corresponding lenslet position. Using that information, one can map the irradiance of the light coming from an object. However, owing to the relatively poor spatial resolution of SH sensors (usually around a thousand lenslets), they can barely compete with interferometric systems in measuring the complex amplitude of an object with high spatial-frequency content.

Another approach for measuring the local slope of a wavefront is to use a small scanning aperture and a single lens rather than an array of lenslets [Fig. 2(b)]. For each position of the aperture, only light from one region of the wavefront is focused by the lens. This light generates a focal spot on the Fourier plane of the lens, where the detector is placed. For a small enough aperture, the wavefront can be locally approximated by a plane wave, and it can be demonstrated (see Methods) that the position of the centroid of the distribution is related to the gradient of the phase inside the aperture by the equation

$$\overrightarrow{\mathrm{\Delta}}=\frac{\lambda f}{2\pi }\,\nabla \varphi (a,b),$$

where $f$ is the focal length of the lens and $(a,b)$ is the position of the aperture.

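As a quick numerical sanity check of this centroid–gradient relation, the following one-dimensional sketch (with illustrative parameters, not those of the experiment) propagates a tilted plane wave through a small aperture with an FFT and compares the focal-spot centroid with $(\lambda f/2\pi )\,\partial \varphi /\partial x$:

```python
import numpy as np

# A linear phase ramp inside a small aperture displaces the focal spot by
# an amount proportional to the local phase gradient. Parameters below are
# illustrative stand-ins, not the values used in the paper.
lam, f = 532e-9, 150e-3            # wavelength (m), condenser focal length (m)
N, dx = 512, 10e-6                 # samples and sampling pitch (m)
x = (np.arange(N) - N // 2) * dx

slope = 3e4                        # phase gradient d(phi)/dx in rad/m
aperture = np.abs(x) < 20 * dx     # small "scanning" aperture
U1 = aperture * np.exp(1j * slope * x)

# Focal-plane field and irradiance (Fourier transform of the masked field)
U2 = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(U1)))
I2 = np.abs(U2) ** 2
x_f = lam * f * (np.arange(N) - N // 2) / (N * dx)   # focal-plane coordinate

centroid = np.sum(x_f * I2) / np.sum(I2)             # first-order moment
predicted = lam * f * slope / (2 * np.pi)
print(centroid, predicted)         # the two values agree to within ~1%
```

The sinc-like tails of the diffraction pattern introduce a small residual error, which shrinks as the simulation window grows.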
Our technique uses 2D structured illumination patterns as an alternative to a small aperture. As can be seen in Fig. 2(c), instead of selecting a single small region of the wavefront, a time-variable mask (a sequence of programmed patterns) simultaneously selects several regions of the wavefront. This procedure resembles those used in structured illumination microscopy or in single-pixel imaging [44,53,54]. Different single-pixel approaches have recently been proposed to obtain phase information using two-beam [55,56] and common-path [57] interferometry. However, our approach is non-interferometric and uses a simpler geometric formulation similar to that found in Shack–Hartmann sensors. It can also be extended to work with partially coherent light, and the detector in use makes it relatively easy to operate in other spectral regions, such as the infrared or terahertz (THz). It must be noted that, as in single-pixel imaging, the technique presented here can work in two different configurations: the object (or wavefront) under study can be imaged onto the SLM, or the coded patterns can be projected onto the object plane [58–61]. The key requirement is that the mask and the object wavefront overlap in a single plane. Even though the irradiance distribution generated by the focusing lens results from the contribution of light coming from different areas of the sampling plane, the amplitude and phase information can still be retrieved, provided that the sequence of sampling patterns is known (see Methods). Now, the amount of light arriving at the detector for every pattern is greatly increased in comparison with the scanning-aperture scheme. This fact is commonly known as the multiplex or Fellgett’s advantage, and it has been extensively exploited over the last decades to increase the signal-to-noise ratio of optical measurements [54,62–64].
In our proposal, the set of masks is a shifted and rescaled version of the Walsh–Hadamard functions [65], since this choice provides several benefits. First, Walsh–Hadamard functions are binary, so they are a good choice when working with binary amplitude modulators, as is the case for digital micromirror devices (DMDs), which reach very high refresh rates (around 22 kHz). Second, the sampling patterns have the same number of absorbing and transmitting (or reflecting) areas, no matter the size of those areas. This fact is important because, once the window size on the modulator has been fixed, increasing or decreasing the spatial resolution of the masks does not change the total amount of light transmitted or reflected by the device. Then, irrespective of the chosen spatial resolution, each measurement always works with the same number of incident photons, unlike the wavefront sensing approaches described above. Furthermore, since in every measurement only a single spatial light distribution is detected, the crosstalk errors that limit the dynamic range of a SH sensor are not present here. However, there is also a drawback. When operating with Hadamard functions, the number of photons used per measurement decreases by a factor of two, since only half of the DMD mirrors are used to illuminate the sample. Nevertheless, this approach makes it possible to use compressive sensing, considerably increasing the acquisition speed.
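A minimal sketch of how such binary masks can be built (using Sylvester-ordered Hadamard matrices rather than sequency-ordered Walsh functions, purely for illustration), verifying that every non-trivial pattern switches on exactly half of the mirrors regardless of resolution:

```python
import numpy as np

def hadamard(n):
    """Sylvester-construction Hadamard matrix; n must be a power of two."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.kron(H, np.array([[1, 1], [1, -1]]))
    return H

# Binary DMD masks: shift and rescale the +/-1 Walsh-Hadamard functions
# to {0, 1}. The 2D masks are separable (outer products of 1D functions).
n = 8                                        # 8x8-pixel masks -> 64 patterns
H = hadamard(n)
masks = [(1 + np.outer(H[i], H[j])) // 2
         for i in range(n) for j in range(n)]

# Except for the all-ones pattern, every mask turns on exactly half of the
# mirrors, so the transmitted power is independent of the chosen resolution.
on_counts = {int(m.sum()) for m in masks[1:]}
print(on_counts)   # {32}: n*n/2 mirrors on for every non-trivial mask
```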

To recover the amplitude and phase information of a wavefront, our method requires the measurement of the zero- and first-order statistical moments of the spatial distribution of light, i.e., its total irradiance and the position of its centroid. The classical approach to measuring those quantities is to place a pixelated sensor in the Fourier plane of the lens to acquire a digital image, and then compute them numerically. However, other experimental approaches can be explored. Here we have chosen a lateral position detector instead of a camera. The benefit of this detector lies in its capability to provide the power of the beam and the position of its centroid at speeds on the order of several kilohertz (kHz) without the need for computational procedures. Although digital cameras offer better sensitivity and accuracy in the detection of the centroid, we have opted for a simpler design to demonstrate the feasibility of our approach.

## 3. RESULTS

#### A. Experimental Verification

To demonstrate our method, we present the experimental device shown in Fig. 3(a). Light coming from a laser (Oxxius slim-532) emitting at 532 nm is collimated with a lens (${\mathrm{L}}_{1}$) and impinges onto a DMD (DLP Discovery 4100 V-7000 from ViALUX). By using a $4f$ system formed by lenses ${\mathrm{L}}_{2}$ and ${\mathrm{L}}_{3}$, light is projected onto an object, which modifies the wavefront phase. After going through the object, light passes through a condensing lens (CL) with a focal length of 150 mm. In its Fourier plane, the lateral position detector (Thorlabs PDP90A) measures the irradiance of the beam and the position of its centroid. To retrieve the amplitude and phase information of the object, a full set of Hadamard patterns is projected. The size of this set depends on the pixel size of the image one wants to obtain: for an $N\times N$ image, ${N}^{2}$ patterns must be sent (see Methods). This sequential acquisition represents a limitation of our technique; whereas a SH sensor captures all the information in a single shot, our technique relies on sequential sampling of the wavefront.
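The acquisition loop can be sketched as follows; the hardware is mocked with a simulated wavefront, and the detector readings (beam power and focal-spot centroid) are computed from the focal-plane irradiance. Names and sizes are illustrative:

```python
import numpy as np

def hadamard(n):
    # Sylvester construction; n must be a power of two
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.kron(H, np.array([[1, 1], [1, -1]]))
    return H

# Mocked sequential acquisition for a small N x N reconstruction: each of
# the N^2 Hadamard masks is "displayed" and the simulated detector returns
# the beam power and the spot centroid.
N = 8
rng = np.random.default_rng(0)
wavefront = np.exp(1j * rng.normal(size=(N, N)))  # stand-in complex field

H = hadamard(N * N)                  # one binary pattern per matrix row
ix = np.arange(N)
records = []
for k in range(N * N):                         # N^2 sequential measurements
    mask = (1 + H[k].reshape(N, N)) // 2       # binary {0,1} DMD pattern
    I = np.abs(np.fft.fft2(mask * wavefront)) ** 2   # focal-plane irradiance
    S = I.sum()                                # zero-order moment: beam power
    cx = (I * ix[None, :]).sum() / S           # first-order moments: centroid
    cy = (I * ix[:, None]).sum() / S
    records.append((S, cx, cy))

print(len(records))   # 64 measurements for an 8 x 8 image
```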

As a first example, we show the results for a photoresist plate representing a typical coma aberration [Figs. 3(c) and 3(d)]. In this case, the spatial resolution was set to $64\times 64\text{\hspace{0.17em}}\mathrm{pixels}$ with a pixel pitch of 82.08 μm. For this size, the total acquisition time was roughly 1 s. This time is limited not by the refresh rate of our DMD (which would need roughly 200 ms to project ${64}^{2}$ patterns) but by the bandwidth of our detector [Fig. 3(b)]. Faster lateral position sensing detectors, which are commercially available, would speed up the acquisition process. A comparison of this result with those obtained by other wavefront sensing techniques (using a SH sensor and through an interferometric measurement) can be found in Supplement 1.

#### B. Comparison with Shack–Hartmann Wavefront Sensing

To test the capabilities of the proposed technique, we have compared it with a commercial SH wavefront sensor (Thorlabs WFS150-5C) in several scenarios. As a first object, we employ a spherical lens [Fig. 4(a)] with a focal length of 500 mm aligned with the optical axis. This simple choice facilitates a quantitative comparison of both methods, since we expect to recover the phase of a spherical wavefront. The object plane is imaged onto the lenslet array plane of the SH wavefront sensor by using a $4f$ system. The focal spots produced by the lenslet array can be seen in Fig. 4(b). Using the displacements of each focal spot with respect to a reference position (previously determined by a calibration measurement), a 3D view of the wavefront can be recovered by direct integration. To ease the visualization, we also present a wrapped view of the wavefront phase [Fig. 4(c)].

Using the optical setup shown in Fig. 3(a), we reconstruct the wavefront phase by spatial wavefront sampling. A small region of the full measured $64\times 64$ map of centroid displacements is shown in Fig. 4(d). The modulo-$2\pi $ phase reconstructed from those displacements can be observed in Fig. 4(e). The total phase change over the aperture is $9.67\lambda $, a value that differs by $0.06\lambda $ from that obtained using the SH sensor.

As a second experiment, we move the lens away from the optical axis, thus producing a larger phase gradient. Given the technical specifications of our SH sensor, the maximum measurable total phase change over the aperture of the sensor is around $100\lambda $. When the phase gradient introduced by the off-axis lens produces a higher phase variation, the focal spots move so much in one direction that crosstalk appears and the sensor is not able to provide reliable results.

In the top row of Fig. 5, we show the result obtained by using the SH sensor at the limit of its dynamic range (here we show a wrapped view of the phase to ease visualization). This limit can be considerably exceeded using the setup shown in Fig. 3(a). As can be seen in the bottom row of Fig. 5, spatial wavefront sampling allows us to measure a total phase change of $217\lambda $. This improvement in the dynamic range is due to the relative increase in the size of the detector area attained by our technique. In a SH sensor, the detector is divided into a number of regions equal to the number of lenslets in the array. Each region, containing a small number of pixels, is used to map the position of one focal spot and sets a limit on the maximum spot displacement. As mentioned before, moving beyond that limit causes several focal spots to share the same region of the detector, leading to significant errors in the reconstructed phase. Since our technique sequentially detects the light distribution as a whole, the full size of the sensor can be used in every consecutive measurement. As a result, the crosstalk penalty typically found in a SH sensor is no longer present. As in the design stage of SH sensors, fine tuning of the focal length of the condenser lens used in our setup, as well as a proper selection of the size of the detector, will provide a larger or smaller dynamic range and sensitivity in the phase detection.

#### C. Use of Compressive Sensing

Given the similarities between our technique and single-pixel imaging, some advanced undersampling approaches can be used to improve both the acquisition speed and the light efficiency of our method.

CS provides a recovery framework for signals sampled below the limit imposed by the Shannon–Nyquist theorem [43]. For a sequential signal sampling, this implies a reduction in measurement time. Once the signal has been sampled, an optimization algorithm provides an estimation of the original signal. This estimation is usually based on sparsity constraints and provides a high-fidelity reconstruction (see Methods).
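The idea can be illustrated with a toy example in which a sparse signal is recovered from fewer measurements than unknowns. A plain iterative soft-thresholding (ISTA) loop is used here as a stand-in for the solver employed in the experiments; all sizes and parameters are illustrative:

```python
import numpy as np

def hadamard(n):
    """Sylvester-construction Hadamard matrix; n must be a power of two."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.kron(H, np.array([[1, 1], [1, -1]]))
    return H

# Sparse test signal and a partial (row-subsampled) Hadamard sensing matrix
rng = np.random.default_rng(1)
n, m, k = 64, 24, 3                      # unknowns, measurements, sparsity
x0 = np.zeros(n)
x0[rng.choice(n, size=k, replace=False)] = 3 + rng.normal(size=k)

A = hadamard(n)[rng.choice(n, size=m, replace=False)] / np.sqrt(n)
y = A @ x0                               # m < n compressed measurements

# ISTA: minimize 0.5 * ||A x - y||^2 + lam * ||x||_1
lam, xhat = 0.02, np.zeros(n)
for _ in range(2000):
    g = xhat - A.T @ (A @ xhat - y)      # gradient step (unit step size is
                                         # valid: the rows of A are orthonormal)
    xhat = np.sign(g) * np.maximum(np.abs(g) - lam, 0.0)   # soft threshold

print(np.corrcoef(x0, xhat)[0, 1])       # close to 1: faithful reconstruction
```

Here 24 of 64 measurements (a compression ratio below 40%) suffice because the signal is sparse, mirroring the undersampled acquisition used in the experiments.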

In our experiments, we have used standard CS algorithms (the ${l}_{1}$-magic package [66]) to recover the coma aberration presented in Fig. 3. The results are shown in Fig. 6.

As can be seen, for relatively smooth aberrations, estimations with fidelity higher than 90% can be recovered using 10% of the total number of measurements established by the Shannon–Nyquist criterion. Figure 6 includes two insets that show individual reconstructions for compression ratios corresponding to correlation coefficients over 0.9 and 0.99, respectively (points A and B in the fidelity plot). As in our commercial Shack–Hartmann sensor, the recovered wavefronts are expressed in the Zernike basis, and high-resolution images are generated from that expansion (see Supplement 1 for details). Taking this into account, high-resolution images with acquisition times of 20 ms could be achieved operating at the full frame rate of the DMD ($\sim 20\text{\hspace{0.17em}}\mathrm{kHz}$), allowing us to perform real-time aberrometry. This could be very useful in ophthalmic scenarios, where the patient needs to stand still while the eye aberrations are measured. Patterned illumination could also be beneficial in biological scenarios, where photodamage thresholds can limit the amount of light that can be used at each region of the sample in a reflection configuration [67,68]. By using this CS-based procedure, one would be able to increase the measured signal level without causing photodamage.

#### D. Complex Amplitude Retrieval and Comparison with Phase-Shifting Interferometry

Our technique offers the possibility of reconstructing not only the phase distribution of a wavefront coming from an object but also its amplitude. For the set of illumination patterns, the light power measured by the detector provides the projections of the wavefront amplitude into the basis of Hadamard functions. By measuring all the successive projections, the object amplitude image can be recovered offline, following the operation principle of single-pixel imaging [44] (see Methods). A simple example with an object composed of an amplitude mask attached to the lens used in the previous section can be seen in Supplement 1.

In principle, the complex amplitude of a wavefront can be retrieved by means of a SH sensor, but with a relatively poor spatial resolution (given by the lenslet diameter and the lenslet-array fill factor). As a consequence, spatial features smaller than a few hundred micrometers are lost in the recovery process. Even though high-resolution SH sensors are being developed and can be used to perform phase imaging, the trade-offs between dynamic range and sensitivity are still present [69]. In our technique, the limit in spatial resolution is ultimately given by the modulator pixel size (typically $\sim 10\text{\hspace{0.17em}}\mathrm{\mu m}$) and the magnification of the projecting system. In our setup, this represents a one-order-of-magnitude improvement in resolution with respect to a typical SH sensor.

To demonstrate this improvement, we use as a sample a thin layer of photoresist material. We placed the photoresist over a transparent plate, creating an object with regions with and without material, as can be seen in Fig. 7(a). Owing to the refractive-index difference between the photoresist and the air, an impinging wavefront acquires a spatial phase distribution when it passes through the object. Using the setup shown in Fig. 3(a), the amplitude and phase information of the sample are recovered [Fig. 7(b)]. A 3D visualization of the phase information is shown in Fig. 7(c). The images have a spatial resolution of $128\times 128\text{\hspace{0.17em}}\text{pixels}$ with a pixel pitch of 41.04 μm. Since both the photoresist and the transparent plate present similar absorption in the green region of the spectrum, the amplitude image is of low quality. However, fine details of the sample are recovered in the phase image, such as the small holes (with diameters of around 80 μm) in the photoresist stripes produced by air bubbles during manufacturing. These small spatial features cannot be retrieved by our SH sensor, as the diameter of each lenslet is 150 μm. Additionally, because of the low number of lenslets $(39\times 31=1209)$, the SH sensor would only provide a clearly pixelated image. An additional recovery of the same object, using CS techniques, can be seen in Supplement 1.

To check the validity of our results, we resort to phase-shifting interferometry. This technique uses a pixelated detector, which makes it possible to exploit the high spatial resolution of a commercial CCD (in our experiment, the pixel size is 6.5 μm). The recovered phase is shown in Fig. 7(d). Unlike our wavefront sensing technique, phase-shifting interferometry only provides modulo-$2\pi$ mappings [see the vertical scale in Fig. 7(d)], requiring the use of unwrapping algorithms to reconstruct a continuous phase like the one shown in Fig. 7(b). We focus our attention on the small region between the two air bubbles in one of the photoresist stripes. After zooming in, it can be seen that the phase difference between the top of the stripe and the substrate is 22.7 radians. In our measurements with structured illumination, this difference is 23 radians, in very good agreement with the previous value. We also performed a thickness measurement of the region of interest with a mechanical profilometer (Dektak 6, Veeco). Using the stripe height provided by the profilometer (3.8 μm) as the thickness $L$, the refractive-index contrast of the photoresist layer can be estimated from the optical path length, $L=\lambda \mathrm{\Delta}\varphi /(2\pi \mathrm{\Delta}n)$. In both phase-shifting interferometry and our technique, the index estimation is 1.51, in good agreement with the nominal value of the photoresist material.
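The index estimate can be reproduced from the quoted numbers, since the optical path difference accumulated over the stripe height $h$ satisfies $\mathrm{\Delta}\varphi =2\pi \,\mathrm{\Delta}n\,h/\lambda $:

```python
import math

# Consistency check of the refractive-index estimate using the values
# quoted in the text: dn = lambda * dphi / (2 * pi * h).
lam = 532e-9          # wavelength (m)
dphi = 22.7           # phase step from interferometry (rad)
h = 3.8e-6            # stripe height from the profilometer (m)

dn = lam * dphi / (2 * math.pi * h)
n = 1 + dn            # photoresist surrounded by air (n_air = 1)
print(round(n, 2))    # -> 1.51, matching the value reported in the text
```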

#### E. Discussion

We have introduced a wavefront sensor based on patterned illumination produced by an amplitude SLM. Instead of resorting to an array of lenslets, as in SH wavefront sensing, the spatial information is captured by illuminating the object with binary amplitude masks generated with a DMD. The sensor is a simple position-sensitive photodetector. This provides several benefits. First, we eliminate the spatial resolution constraints of traditional SH wavefront sensors. Second, by using the full size of the detector in each measurement, the dynamic range is greatly extended and the crosstalk problem is eliminated. Last, the use of Hadamard functions makes the method robust to changes in the spatial dimension (number of pixels) to be obtained. The total number of photons used by the system is determined only by the window size used on the DMD, because Hadamard patterns always have half of their pixels at each binary value, regardless of the function dimension. Then, increasing or decreasing their spatial dimension does not necessarily change the total bright area displayed by the DMD. This is not the case in SH wavefront sensing, where increasing spatial resolution entails decreasing the size of the lenslets, which reduces the number of photons arriving at each part of the sensor. This, in combination with the multiplex advantage, suggests that the technique is well suited to low-light-level scenarios, where similar approaches have already been proposed to obtain amplitude information [70]. Additionally, we have exploited the high spatial resolution offered by commercial DMDs to perform phase imaging with a pixel size of tens of micrometers. To this end, we have imaged a photoresist sample and calculated its local refractive index. These results have been shown to be comparable with those provided by phase-shifting interferometry, a well-established interferometric technique.

A distinctive feature of our technique is that the spatial resolution is fixed by the SLM and the projecting optics, while the other optical elements in the sensor, the focusing lens and the light detector, set the attainable sensitivity and dynamic range. By calculating a few statistical moments of the light distribution at the detector plane (its irradiance and the position of its centroid), the method provides both amplitude and phase information. This enables the technique to produce results closer to those of conventional phase-imaging approaches, while still retaining some of the benefits of the SH approach: its referenceless nature, and the fact that the recovery process requires neither phase-unwrapping nor iterative algorithms. In addition, the easy implementation of our system also offers the possibility of using it as an add-on module of a conventional microscope [20,71], thus allowing us to carry out quantitative phase imaging with submicrometer spatial resolution.

Owing to the temporal multiplexing nature of the technique, there is a trade-off between spatial resolution and image acquisition time. Here, we have shown results with resolutions between $64\times 64$ and $128\times 128\text{\hspace{0.17em}}\text{pixels}$, which take seconds to acquire. Using faster, already available detectors will allow image capture at video frame rates. However, the greater the resolution, the slower the acquisition. This drawback can be considerably alleviated using CS [72] or adaptive approaches [73,74]. The feasibility of applying CS has been shown in Section 3, where a typical ocular aberration was reconstructed with high compression ratios. In particular, acquisition times of around 20 ms could be achieved operating at the full frame rate of the DMD, which would open the door to real-time aberrometry with our system. Last, given the technological challenge of manufacturing both arrays of lenslets and pixelated sensors that work outside the visible spectrum, the technique proposed here is a good candidate to operate in regions such as the IR and THz, where similar approaches have already been used to obtain amplitude images [54,75].

## 4. METHODS

#### A. Phase Measurement from Centroid Position

Let us assume a light distribution at position $z=0$ with the form ${U}_{1}(x,y)=A(x,y){e}^{i\varphi (x,y)}$. If a thin lens with focal length $f$ is placed at position $z=f$, the complex amplitude distribution at position $z=2f$, obtained by Fresnel propagation, will be ${U}_{2}(x,y)=\frac{1}{j\lambda f}{\tilde{U}}_{1}(\frac{x}{\lambda f},\frac{y}{\lambda f})$, where ${\tilde{U}}_{1}$ denotes the Fourier transform of ${U}_{1}$. The irradiance of this light distribution will be ${I}_{2}(x,y)={|{U}_{2}(x,y)|}^{2}=\frac{1}{{(\lambda f)}^{2}}{|{\tilde{U}}_{1}(\frac{x}{\lambda f},\frac{y}{\lambda f})|}^{2}$. Our technique is based on the calculation of several statistical moments of ${I}_{2}$. In particular, we use the energy, $S$, and the centroid position, $\overrightarrow{\mathrm{\Delta}}=(\mathrm{\Delta}x,\mathrm{\Delta}y)$. Those quantities are given by

$$S=\iint {I}_{2}(x,y)\,\mathrm{d}x\,\mathrm{d}y,\qquad \overrightarrow{\mathrm{\Delta}}=\frac{1}{S}\iint (x,y)\,{I}_{2}(x,y)\,\mathrm{d}x\,\mathrm{d}y.$$

By using Parseval’s theorem, it is easy to prove that $S=\iint {|{U}_{2}(x,y)|}^{2}\,{\rm d}x\,{\rm d}y=\iint {|{U}_{1}(x,y)|}^{2}\,{\rm d}x\,{\rm d}y$, so the first measurement provides the energy of the wavefront. Using the moment theorem of Fourier transforms and some algebra (see Supplement 1), it can be proved that the centroid position at the measurement plane is related to the phase of the object at the initial plane by

$$\overrightarrow{\mathrm{\Delta}}=\frac{\lambda f}{2\pi S}\iint {A}^{2}(x,y)\,\overrightarrow{\nabla}\varphi (x,y)\,{\rm d}x\,{\rm d}y.\qquad (4)$$
For a small square aperture with lateral size $L$, placed at position $(a,b)$, the amplitude of the field can be described as

$$A(x,y)=K\,{\rm rect}\left(\frac{x-a}{L}\right){\rm rect}\left(\frac{y-b}{L}\right),\qquad (5)$$
where $K$ is a constant. Introducing this into Eq. (4), we get

$$\overrightarrow{\mathrm{\Delta}}=\frac{\lambda f{K}^{2}}{2\pi S}\int_{b-L/2}^{b+L/2}\int_{a-L/2}^{a+L/2}\overrightarrow{\nabla}\varphi (x,y)\,{\rm d}x\,{\rm d}y.\qquad (6)$$

If the square aperture is small enough, the gradient of the phase can be considered constant over the integration domain. Then, noting that $S={K}^{2}{L}^{2}$, the centroid position results in

$$\mathrm{\Delta}x=\frac{\lambda f}{2\pi}{\left.\frac{\partial \varphi}{\partial x}\right|}_{(a,b)},\qquad (7)$$

$$\mathrm{\Delta}y=\frac{\lambda f}{2\pi}{\left.\frac{\partial \varphi}{\partial y}\right|}_{(a,b)}.\qquad (8)$$
It is clear that, for a small region of the wavefront, the centroid of the resulting distribution provides information about the gradient of the phase in that region. If the small square aperture is displaced, one can measure the corresponding centroid positions and thus estimate the gradient map of the wavefront. After that, numerical integration provides the wavefront at the initial plane. If multiple square masks are used at the same time, it is also possible to relate the measurements to the phase at each of the mask positions. Hadamard illumination can then be understood as a particular case of this procedure, and the phase recovery can be performed while illuminating the full scene with a set of Hadamard patterns (see Supplement 1).
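This gradient-to-centroid relation is easy to check numerically. The following toy simulation (with illustrative grid and aperture sizes and an illustrative phase slope, not the experimental parameters) applies a linear phase ramp inside a small square aperture, propagates the field to the Fourier plane with an FFT, and compares the irradiance centroid with the local phase gradient:

```python
import numpy as np

# Toy check: for a small square aperture carrying a linear phase ramp,
# the centroid of the Fourier-plane irradiance is proportional to the
# local phase gradient. Grid size, aperture size, and slope are
# illustrative assumptions, not the experimental parameters.
N = 512                                   # samples per side
dx = 1.0 / N                              # grid step (arbitrary units)
x = (np.arange(N) - N // 2) * dx
X, Y = np.meshgrid(x, x)

slope = 10.0 * np.pi                      # phase gradient d(phi)/dx in the aperture
aperture = (np.abs(X) < 0.05) & (np.abs(Y) < 0.05)
U1 = aperture * np.exp(1j * slope * X)

# The lens maps the field to its Fourier transform (up to constant factors)
I2 = np.abs(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(U1)))) ** 2

fx = np.fft.fftshift(np.fft.fftfreq(N, d=dx))   # frequency axis of the FFT plane
FX, _ = np.meshgrid(fx, fx)

S = I2.sum()                              # energy (Parseval)
cx = (FX * I2).sum() / S                  # centroid along the x frequency axis

print(cx, slope / (2 * np.pi))            # centroid ~= gradient / (2*pi)
```

In frequency units, the recovered centroid matches the phase slope divided by $2\pi$, which is the small-aperture result above up to the $\lambda f$ scale factor of the physical Fourier plane.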

#### B. Object Recovery by Multiplexing with Hadamard Patterns

Every measurement can be mathematically described by the equation

$$\overrightarrow{y}=M\overrightarrow{x},\qquad (9)$$
where the object to be measured is described by the object vector $\overrightarrow{x}$, the measurements made are represented by the vector $\overrightarrow{y}$, and the measurement process is carried out by the sensing operator, represented by the matrix $M$. Once the measurements have been made, one just needs to solve the algebraic problem posed by Eq. (9) to recover the object. The structure of the sensing operator is determined by both the nature of the object under study and the system used to recover an image. For a traditional camera setup, the matrix $M$ is just the identity matrix, and each element of $\overrightarrow{x}$ is proportional to the energy of a given area of the object. By placing a detector at the image plane, the energies of each zone are measured and the object is recovered. Usually this is done with a sensor array, like a CCD camera, and all the measurements are made at the same time. However, there are scenarios, such as confocal microscopy, where this process is conducted by a bucket detector, usually a photomultiplier tube. In this case, the measurement process is sequential.

It is possible to exploit Eq. (9) to increase the capabilities of an optical setup. In our approach, instead of just using the identity matrix as the sensing operator, each row of the matrix $M$ contains a Hadamard function. Now, the measurement process consists of forming the superposition of the object with each one of the Hadamard functions. After that, the energy of that superposition is measured by the detector. This process can be performed by projecting the functions onto the object with a SLM, as is done in our phase-imaging system. With this approach, each measurement contains light coming from several regions of the object. This increases the amount of light in each measurement, and thus the signal-to-noise ratio. As the Hadamard functions form an orthogonal basis, Eq. (9) can be easily solved.

Once this way of measuring has been introduced, it is easy to see that it is possible to recover an image of different physical parameters, provided that the measurement process can be described by Eq. (9). In particular, our centroid measurements can be arranged in matrix form as

$$\overrightarrow{\mathrm{\Delta}}=M\,\overrightarrow{\overrightarrow{\nabla}\varphi}.\qquad (10)$$

Here, $\overrightarrow{\mathrm{\Delta}}$ is the vector containing the measured centroid positions, and now the object is the spatial gradient distribution, expressed as a vector, $\overrightarrow{\overrightarrow{\nabla}\varphi}$ (each element of this vector is the phase gradient at a given position of the object). Again, in our experiments we used the Hadamard functions as rows of the sensing matrix. For each Hadamard function projected onto the object, the energy of the distribution and the position of its centroid are measured with the position sensing detector. Then, Eq. (10) is solved and the gradient of the phase is obtained. After that, numerical integration provides the phase of the wavefront, $\varphi (x,y)$.
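The multiplexing and inversion steps can be sketched in a few lines. The toy example below (our own illustration; it uses $\pm 1$ Hadamard patterns built by the Sylvester construction, whereas the experiment projects shifted 0/1 patterns with a DMD) measures projections of a flattened gradient map and recovers it by exploiting the orthogonality of the Hadamard matrix:

```python
import numpy as np

def hadamard(n):
    """Sylvester-construction Hadamard matrix of order n (n a power of 2)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

N = 64
H = hadamard(N)                        # each row is one +/-1 sensing pattern
rng = np.random.default_rng(0)
grad_phi = rng.standard_normal(N)      # stand-in for the flattened gradient map

y = H @ grad_phi                       # one multiplexed measurement per pattern
recovered = H.T @ y / N                # H^T H = N * I, so this inverts Eq. (10)

print(np.allclose(recovered, grad_phi))   # True
```

Because the basis is orthogonal, the inversion is a single matrix product; no iterative solver is needed when the full set of $N$ patterns is measured.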

#### C. Compressive Sensing

Solving the problem given by Eq. (9) is easy when sampling the object at the Shannon–Nyquist rate. However, for $k<N$ measurements, the equation system has infinitely many solutions. The fundamental idea of CS is to exploit the fact that most signals found in nature are sparse in some basis of functions, i.e., they have a representation in some basis where most of the expansion coefficients are zero, so only a small number of them contain relevant information. This fact can be used to provide a solution to the above-mentioned underdetermined equation system. We can express the measurements as $\overrightarrow{y}=M\overrightarrow{x}=M\mathrm{\Psi}\overrightarrow{\alpha}=\mathrm{\Phi}\overrightarrow{\alpha}$, where we have represented the object in a given basis of functions $(\overrightarrow{x}=\mathrm{\Psi}\overrightarrow{\alpha})$. Then, CS algorithms provide a solution to the ${l}_{1}$-norm minimization problem

$$\min_{\overrightarrow{\alpha}}{\Vert \overrightarrow{\alpha}\Vert}_{1}\quad \text{subject to}\quad \overrightarrow{y}=\mathrm{\Phi}\overrightarrow{\alpha}.\qquad (11)$$
In order to find a good solution, some premises need to be fulfilled [76]. First, the number of measurements cannot be arbitrarily low. Depending on the sparsity of the object (which depends on the chosen recovery basis), a higher or lower number of measurements is needed to obtain good results. Second, the measurement and recovery bases need to be mutually incoherent. In practice, it is usually easy to find pairs of bases that fulfill this principle. Usually, the measurement basis is chosen for its ease of generation on a SLM, and once it has been chosen, the user selects a recovery basis that fulfills the incoherence constraint and in which the object is sparse. Last, a sensing strategy needs to be defined. Initially, it was proposed that randomly choosing a subset of elements of the measurement basis was enough to obtain good results [43]. Soon after, other strategies were defined to reduce the number of measurements while maintaining image quality. For example, it is known that natural scenes tend to have a power spectrum concentrated at low frequencies, so mixed sampling strategies have been used with very good results [63].

In our experiments, $M$ represents the shifted and rescaled Hadamard basis (the elements of which are either 0 or 1), easily generated on a DMD, and $\mathrm{\Psi}$ is chosen to be the Haar wavelet basis, in which most images tend to be sparse. The measurements were performed following the mixed approach: fixing the number of samples, $k$, we chose the $k/2$ lowest-frequency Hadamard patterns and then selected $k/2$ patterns at random from the remaining elements of the complete basis.
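The recovery step can be illustrated with a deliberately simple solver (not the one used for the results in this paper, and with the canonical basis standing in for the Haar wavelets): the sketch below recovers a sparse vector from a random subset of Hadamard projections using iterative soft thresholding (ISTA) for the ${l}_{1}$ problem:

```python
import numpy as np

def hadamard(n):
    """Sylvester-construction Hadamard matrix of order n (n a power of 2)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

rng = np.random.default_rng(1)
N, k, s = 256, 96, 5                  # object size, measurements, sparsity
x_true = np.zeros(N)
x_true[rng.choice(N, s, replace=False)] = rng.standard_normal(s)

rows = rng.choice(N, k, replace=False)
M = hadamard(N)[rows] / np.sqrt(N)    # partial Hadamard sensing, orthonormal rows
y = M @ x_true                        # k < N compressed measurements

# ISTA: gradient step on ||y - Mx||^2 followed by soft thresholding (l1 proximal)
tau = 0.01                            # sparsity weight (illustrative value)
x = np.zeros(N)
for _ in range(500):
    x = x + M.T @ (y - M @ x)
    x = np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

print(np.max(np.abs(x - x_true)))     # small reconstruction error
```

Even with fewer measurements than unknowns, the sparsity prior singles out the correct solution of the underdetermined system, which is the essence of the compressive acquisitions reported in Section 3.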

#### D. Phase-Shifting Holography Measurements

For the measurements using phase-shifting digital holography, we used an interferometer in a Mach–Zehnder configuration. A collimated laser (Oxxius slim-532) was divided by a beam splitter into the object and reference beams. The first illuminated the object, while the second traveled directly towards the camera (Allied Stingray F-145). The object, a layer of photoresist spin-coated onto a transparent plate, was imaged onto the camera by an optical system in a $4f$ configuration. The phase shifts were generated by shifting a grating encoded onto a DMD (DLP Discovery 4100) and filtering the first diffraction order in the Fourier plane of a $4f$ optical configuration located before the object plane [77]. The camera recorded four interferograms with a phase-shift interval of $\pi /2$, and a standard phase-shifting algebraic operation [78] was used to measure the amplitude and phase at the object plane.
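For reference, one common form of the four-step algebra can be verified on a synthetic single-point example (with assumed background and modulation values chosen only for illustration):

```python
import numpy as np

# Four-step phase shifting: with shifts of 0, pi/2, pi, 3*pi/2, the
# recorded irradiances are I_n = A + B*cos(phi + delta_n), and the phase
# follows from an arctangent. A, B, and phi are illustrative values.
phi = 1.2                                     # phase to recover
A, B = 2.0, 0.7                               # background and modulation
shifts = [0.0, np.pi / 2, np.pi, 3 * np.pi / 2]
I1, I2, I3, I4 = [A + B * np.cos(phi + d) for d in shifts]

# I4 - I2 = 2B*sin(phi), I1 - I3 = 2B*cos(phi)
phi_rec = np.arctan2(I4 - I2, I1 - I3)
print(phi_rec)                                # ~1.2
```

The arctangent removes both the background $A$ and the modulation $B$, so the phase is obtained without knowing either; the same operation applied pixel-wise to the four recorded interferograms yields the phase map at the object plane.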

## Funding

Generalitat Valenciana (PROMETEO/2016/079); Ministerio de Economía y Competitividad (MINECO) (FIS2015-72872-EXP, FIS2016-75618-R); Universitat Jaume I (UJI) (P1-1B2015-35, PREDOC/2013/32).

## Acknowledgment

We thank Salvador Bará for the photoresist plates simulating optical aberrations used in our experiments. Author contributions: F. S. and E. T. conceived the idea; F. S. performed the experiments with the aid of P. C. and V. D.; F. S. and V. D. wrote the paper; all the authors discussed the results and revised the paper; J. L. and E. T. coordinated the project.

See Supplement 1 for supporting content.

## REFERENCES

**1. **F. Zernike, “How I discovered phase contrast,” Science **121**, 345–349 (1955). [CrossRef]

**2. **G. Popescu, “The power of imaging with phase, not power,” Phys. Today **70**(5), 34–40 (2017). [CrossRef]

**3. **J. Liang, B. Grimm, S. Goelz, and J. F. Bille, “Objective measurement of wave aberrations of the human eye with the use of a Hartmann–Shack wave-front sensor,” J. Opt. Soc. Am. A **11**, 1949–1957 (1994). [CrossRef]

**4. **P. Artal, “Optics of the eye and its impact in vision: a tutorial,” Adv. Opt. Photon. **6**, 340–367 (2014). [CrossRef]

**5. **G. Nehmetallah and P. P. Banerjee, “Applications of digital and analog holography in three-dimensional imaging,” Adv. Opt. Photon. **4**, 472–553 (2012). [CrossRef]

**6. **D. Gabor, “Microscopy by reconstructed wave-fronts,” Proc. R. Soc. A **197**, 454–487 (1949). [CrossRef]

**7. **M. Mir, B. Bhaduri, R. Wang, R. Zhu, and G. Popescu, “Quantitative phase imaging,” in *Progress in Optics* (Elsevier, 2012), Vol. 57, pp. 133–217.

**8. **R. Tyson, *Principles of Adaptive Optics*, Series in Optics and Optoelectronics, 3rd ed. (CRC Press, 2010), Vol. 20102628.

**9. **M. J. Booth, “Adaptive optical microscopy: the ongoing quest for a perfect image,” Light Sci. Appl. **3**, e165 (2014). [CrossRef]

**10. **N. Ji, “Adaptive optical fluorescence microscopy,” Nat. Methods **14**, 374–380 (2017). [CrossRef]

**11. **K. Wang, D. E. Milkie, A. Saxena, P. Engerer, T. Misgeld, M. E. Bronner, J. Mumm, and E. Betzig, “Rapid adaptive optical recovery of optimal resolution over large volumes,” Nat. Methods **11**, 625–628 (2014). [CrossRef]

**12. **A. Kumar, W. Drexler, and R. A. Leitgeb, “Subaperture correlation based digital adaptive optics for full field optical coherence tomography,” Opt. Express **21**, 10850–10866 (2013). [CrossRef]

**13. **Y. Shechtman, Y. C. Eldar, O. Cohen, H. N. Chapman, J. Miao, and M. Segev, “Phase retrieval with application to optical imaging: a contemporary overview,” IEEE Signal Process. Mag. **32**, 87–109 (2015). [CrossRef]

**14. **G. Zheng, R. Horstmeyer, and C. Yang, “Wide-field, high-resolution Fourier ptychographic microscopy,” Nat. Photonics **7**, 739–745 (2013). [CrossRef]

**15. **H. N. Chapman and K. A. Nugent, “Coherent lensless x-ray imaging,” Nat. Photonics **4**, 833–839 (2010). [CrossRef]

**16. **R. Horisaki, Y. Ogura, M. Aino, and J. Tanida, “Single-shot phase imaging with a coded aperture,” Opt. Lett. **39**, 6466–6469 (2014). [CrossRef]

**17. **N. Streibl, “Phase imaging by the transport equation of intensity,” Opt. Commun. **49**, 6–10 (1984). [CrossRef]

**18. **R. Horisaki, R. Egami, and J. Tanida, “Single-shot phase imaging with randomized light (SPIRaL),” Opt. Express **24**, 3765–3773 (2016). [CrossRef]

**19. **L. Tian, X. Li, K. Ramchandran, and L. Waller, “Multiplexed coded illumination for Fourier ptychography with an LED array microscope,” Biomed. Opt. Express **5**, 2376–2389 (2014). [CrossRef]

**20. **L. Tian and L. Waller, “3D intensity and phase imaging from light field measurements in an LED array microscope,” Optica **2**, 104–111 (2015). [CrossRef]

**21. **J. J. Field, K. A. Wernsing, S. R. Domingue, A. M. Allende Motz, K. F. DeLuca, D. H. Levi, J. G. DeLuca, M. D. Young, J. A. Squier, and R. A. Bartels, “Superresolved multiphoton microscopy with spatial frequency-modulated imaging,” Proc. Natl. Acad. Sci. USA **113**, 6605–6610 (2016). [CrossRef]

**22. **P. Gao, G. Pedrini, and W. Osten, “Phase retrieval with resolution enhancement by using structured illumination,” Opt. Lett. **38**, 5204–5207 (2013). [CrossRef]

**23. **B. C. Platt and R. Shack, “History and principles of Shack–Hartmann wavefront sensing,” J. Refract. Surg. **17**, S573–S577 (2001).

**24. **B. Stoklasa, L. Motka, J. Rehacek, Z. Hradil, and L. L. Sánchez-Soto, “Wavefront sensing reveals optical coherence,” Nat. Commun. **5**, 3275 (2014). [CrossRef]

**25. **R. W. Wilson, “SLODAR: measuring optical turbulence altitude with a Shack–Hartmann wavefront sensor,” Mon. Not. R. Astron. Soc. **337**, 103–108 (2002). [CrossRef]

**26. **D. Dayton, J. Gonglewski, B. Pierson, and B. Spielbusch, “Atmospheric structure function measurements with a Shack–Hartmann wave-front sensor,” Opt. Lett. **17**, 1737–1739 (1992). [CrossRef]

**27. **X. Cui, J. Ren, G. J. Tearney, and C. Yang, “Wavefront image sensor chip,” Opt. Express **18**, 16685–16701 (2010). [CrossRef]

**28. **L. Seifert, J. Liesener, and H. J. Tiziani, “The adaptive Shack–Hartmann sensor,” Opt. Commun. **216**, 313–319 (2003). [CrossRef]

**29. **Y. Saita, H. Shinto, and T. Nomura, “Holographic Shack–Hartmann wavefront sensor based on the correlation peak displacement detection method for wavefront sensing with large dynamic range,” Optica **2**, 411–415 (2015). [CrossRef]

**30. **S. Thomas, T. Fusco, A. Tokovinin, M. Nicolle, V. Michau, and G. Rousset, “Comparison of centroid computation algorithms in a Shack–Hartmann sensor,” Mon. Not. R. Astron. Soc. **371**, 323–336 (2006). [CrossRef]

**31. **J. Pfund, N. Lindlein, and J. Schwider, “Dynamic range expansion of a Shack–Hartmann sensor by use of a modified unwrapping algorithm,” Opt. Lett. **23**, 995–997 (1998). [CrossRef]

**32. **S. Wang, P. Yang, B. Xu, L. Dong, and M. Ao, “Shack–Hartmann wavefront sensing based on binary-aberration-mode filtering,” Opt. Express **23**, 5052–5063 (2015). [CrossRef]

**33. **R. Martínez-Cuenca, V. Durán, V. Climent, E. Tajahuerce, S. Bará, J. Ares, J. Arines, M. Martínez-Corral, and J. Lancis, “Reconfigurable Shack–Hartmann sensor without moving elements,” Opt. Lett. **35**, 1338–1340 (2010). [CrossRef]

**34. **T. Godin, M. Fromager, E. Cagniot, M. Brunel, and K. Aït-Ameur, “Reconstruction-free sensitive wavefront sensor based on continuous position sensitive detectors,” Appl. Opt. **52**, 8310–8317 (2013). [CrossRef]

**35. **R. Navarro and E. Moreno-Barriuso, “Laser ray-tracing method for optical testing,” Opt. Lett. **24**, 951–953 (1999). [CrossRef]

**36. **S. Olivier, V. Laude, and J. Huignard, “Liquid-crystal Hartmann wave-front scanner,” Appl. Opt. **39**, 3838–3846 (2000). [CrossRef]

**37. **J.-C. Chanteloup, “Multiple-wave lateral shearing interferometry for wave-front sensing,” Appl. Opt. **44**, 1559–1571 (2005). [CrossRef]

**38. **“Phasics,” http://phasicscorp.com.

**39. **I. Iglesias, “Pyramid phase microscopy,” Opt. Lett. **36**, 3636–3638 (2011). [CrossRef]

**40. **A. B. Parthasarathy, K. K. Chu, T. N. Ford, and J. Mertz, “Quantitative phase imaging using a partitioned detection aperture,” Opt. Lett. **37**, 4062–4064 (2012). [CrossRef]

**41. **I. Iglesias and F. Vargas-Martin, “Quantitative phase microscopy of transparent samples using a liquid crystal display,” J. Biomed. Opt. **18**, 026015 (2013). [CrossRef]

**42. **H. Lu, J. Chung, X. Ou, and C. Yang, “Quantitative phase imaging and complex field reconstruction by pupil modulation differential phase contrast,” Opt. Express **24**, 25345–25361 (2016). [CrossRef]

**43. **E. J. J. Candes and M. B. Wakin, “An introduction to compressive sampling,” IEEE Signal Process. Mag. **25**, 21–30 (2008). [CrossRef]

**44. **M. F. F. Duarte, M. A. A. Davenport, D. Takhar, J. N. N. Laska, K. F. F. Kelly, and R. G. Baraniuk, “Single-pixel imaging via compressive sampling,” IEEE Signal Process. Mag. **25**, 83–91 (2008). [CrossRef]

**45. **H. Gong, O. Soloviev, D. Wilding, P. Pozzi, M. Verhaegen, and G. Vdovin, “Holographic imaging with a Shack–Hartmann wavefront sensor,” Opt. Express **24**, 13729–13737 (2016). [CrossRef]

**46. **F. Zhang, G. Pedrini, and W. Osten, “Phase retrieval of arbitrary complex-valued fields through aperture-plane modulation,” Phys. Rev. A **75**, 043805 (2007). [CrossRef]

**47. **P. Gao, G. Pedrini, C. Zuo, and W. Osten, “Phase retrieval using spatially modulated illumination,” Opt. Lett. **39**, 3615–3618 (2014). [CrossRef]

**48. **S. Chowdhury and J. Izatt, “Structured illumination quantitative phase microscopy for enhanced resolution amplitude and phase imaging,” Biomed. Opt. Express **4**, 1795–1805 (2013). [CrossRef]

**49. **S. Chowdhury and J. Izatt, “Structured illumination diffraction phase microscopy for broadband, subdiffraction resolution, quantitative phase imaging,” Opt. Lett. **39**, 1015–1018 (2014). [CrossRef]

**50. **G. Vdovin, H. Gong, O. Soloviev, P. Pozzi, and M. Verhaegen, “Lensless coherent imaging by sampling of the optical field with digital micromirror device,” J. Opt. **17**, 122001 (2015). [CrossRef]

**51. **W. H. Southwell, “Wave-front estimation from wave-front slope measurements,” J. Opt. Soc. Am. **70**, 998–1006 (1980). [CrossRef]

**52. **R. Kasztelanic, A. Filipkowski, D. Pysz, R. Stepien, A. J. Waddie, M. R. Taghizadeh, and R. Buczynski, “High resolution Shack–Hartmann sensor based on array of nanostructured GRIN lenses,” Opt. Express **25**, 1680–1691 (2017). [CrossRef]

**53. **M. Saxena, G. Eluru, and S. S. Gorthi, “Structured illumination microscopy,” Adv. Opt. Photon. **7**, 241–275 (2015). [CrossRef]

**54. **C. M. Watts, D. Shrekenhamer, J. Montoya, G. Lipworth, J. Hunt, T. Sleasman, S. Krishna, D. R. Smith, and W. J. Padilla, “Terahertz compressive imaging with metamaterial spatial light modulators,” Nat. Photonics **8**, 605–609 (2014). [CrossRef]

**55. **P. Clemente, V. Durán, E. Tajahuerce, P. Andrés, V. Climent, and J. Lancis, “Compressive holography with a single-pixel detector,” Opt. Lett. **38**, 2524–2527 (2013). [CrossRef]

**56. **L. Martínez-León, P. Clemente, Y. Mori, V. Climent, J. Lancis, and E. Tajahuerce, “Single-pixel digital holography with phase-encoded illumination,” Opt. Express **25**, 4975–4984 (2017). [CrossRef]

**57. **P. A. Stockton, J. J. Field, and R. A. Bartels, “Single pixel quantitative phase imaging with spatial frequency projections,” Methods (2017), in press. [CrossRef]

**58. **F. Soldevila, E. Irles, V. Durán, P. Clemente, M. Fernández-Alonso, E. Tajahuerce, and J. Lancis, “Single-pixel polarimetric imaging spectrometer by compressive sensing,” Appl. Phys. B **113**, 551–558 (2013). [CrossRef]

**59. **F. Soldevila, P. Clemente, E. Tajahuerce, N. Uribe-Patarroyo, P. Andrés, and J. Lancis, “Computational imaging with a balanced detector,” Sci. Rep. **6**, 29181 (2016). [CrossRef]

**60. **E. Tajahuerce, V. Durán, P. Clemente, E. Irles, F. Soldevila, P. Andrés, and J. Lancis, “Image transmission through dynamic scattering media by single-pixel photodetection,” Opt. Express **22**, 16945–16955 (2014). [CrossRef]

**61. **V. Durán, F. Soldevila, E. Irles, P. Clemente, E. Tajahuerce, P. Andrés, and J. Lancis, “Compressive imaging in scattering media,” Opt. Express **23**, 14424–14433 (2015). [CrossRef]

**62. **P. R. Griffiths, H. J. Sloane, and R. W. Hannah, “Interferometers vs monochromators: separating the optical and digital advantages,” Appl. Spectrosc. **31**, 485–495 (1977). [CrossRef]

**63. **V. Studer, J. Bobin, M. Chahid, H. S. Mousavi, E. Candes, and M. Dahan, “Compressive fluorescence microscopy for biological and hyperspectral imaging,” Proc. Natl. Acad. Sci. USA **109**, E1679–E1687 (2012). [CrossRef]

**64. **M. Ducros, Y. Goulam Houssen, J. Bradley, V. de Sars, and S. Charpak, “Encoded multisite two-photon microscopy,” Proc. Natl. Acad. Sci. USA **110**, 13138–13143 (2013). [CrossRef]

**65. **W. Pratt, J. Kane, and H. Andrews, “Hadamard transform image coding,” Proc. IEEE **57**, 58–68 (1969). [CrossRef]

**66. **E. Candès and J. Romberg, “l1-magic: recovery of sparse signals via convex programming,” https://www.cs.bham.ac.uk/~axk/Sakinah/inspiring_readings/l1magic.pdf.

**67. **B. Lochocki, A. Gambín, S. Manzanera, E. Irles, E. Tajahuerce, J. Lancis, and P. Artal, “Single pixel camera ophthalmoscope,” Optica **3**, 1056–1059 (2016). [CrossRef]

**68. **Y. Wu, P. Ye, I. O. Mirza, G. R. Arce, and D. W. Prather, “Experimental demonstration of an optical-sectioning compressive sensing microscope (CSM),” Opt. Express **18**, 24565–24578 (2010). [CrossRef]

**69. **H. Gong, T. E. Agbana, P. Pozzi, O. Soloviev, M. Verhaegen, and G. Vdovin, “Optical path difference microscopy with a Shack–Hartmann wavefront sensor,” Opt. Lett. **42**, 2122–2125 (2017). [CrossRef]

**70. **P. A. Morris, R. S. Aspden, J. E. C. Bell, R. W. Boyd, and M. J. Padgett, “Imaging with a small number of photons,” Nat. Commun. **6**, 5913 (2015). [CrossRef]

**71. **Z. Wang, L. Millet, M. Mir, H. Ding, S. Unarunotai, J. Rogers, M. U. Gillette, and G. Popescu, “Spatial light interference microscopy (SLIM),” Opt. Express **19**, 1016–1026 (2011). [CrossRef]

**72. **E. Candès, “Compressive sampling,” in *Proceedings of the International Congress of Mathematicians* (European Mathematical Society, 2006), pp. 1433–1452.

**73. **F. Soldevila, E. Salvador-Balaguer, P. Clemente, E. Tajahuerce, and J. Lancis, “High-resolution adaptive imaging with a single photodiode,” Sci. Rep. **5**, 14300 (2015). [CrossRef]

**74. **H. Dai, G. Gu, W. He, L. Ye, T. Mao, and Q. Chen, “Adaptive compressed photon counting 3D imaging based on wavelet trees and depth map sparse representation,” Opt. Express **24**, 26080–26096 (2016). [CrossRef]

**75. **N. Radwell, K. J. Mitchell, G. Gibson, M. Edgar, R. Bowman, and M. J. Padgett, “Single-pixel infrared and visible microscope,” Optica **1**, 285–289 (2014). [CrossRef]

**76. **J. Romberg, “Imaging via compressive sampling,” IEEE Signal Process. Mag. **25**, 14–20 (2008). [CrossRef]

**77. **D. B. Conkey, A. M. Caravaca-Aguirre, and R. Piestun, “High-speed scattering medium characterization with application to focusing light through turbid media,” Opt. Express **20**, 1733–1740 (2012). [CrossRef]

**78. **I. Yamaguchi and T. Zhang, “Phase-shifting digital holography,” Opt. Lett. **22**, 1268–1270 (1997). [CrossRef]