Abstract

Phase-imaging techniques extract the optical path length information of a scene, whereas wavefront sensors provide the shape of an optical wavefront. Since these two applications have different technical requirements, they have developed their own specific technologies. Here we show how to perform phase imaging by combining wavefront sampling, using a reconfigurable spatial light modulator, with a beam position detector. The result is a time-multiplexed detection scheme whose acquisition time can be considerably shortened by compressive sensing. This robust, referenceless method does not require the phase-unwrapping algorithms demanded by conventional interferometry, and its lenslet-free nature removes trade-offs usually found in Shack–Hartmann sensors.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. INTRODUCTION

Even though the physical nature of light has been fully understood for more than a century, there are still no available detectors capable of directly imaging both the amplitude and phase of a wavefront [Fig. 1(a)]. Information about these two quantities is of central importance in biomedical imaging [1,2], aberration measurement and correction in visual optics [3,4], and three-dimensional imaging [5], among other applications. This limitation ultimately arises from the extremely fast oscillations of the optical fields, which current electronics are unable to resolve. As detectors only capture light irradiance, several approaches have been proposed over time to tackle the phase problem. Historically, Gabor suggested in 1949 the first quantitative technique, which used interferometric information to recover the complex optical field [6], establishing the basis of modern holography and paving the way for applications such as quantitative phase microscopy [7]. In parallel, phase information plays a fundamental role in adaptive optics, a technique that aims to measure and correct optical aberrations in real time [8]. Although adaptive optics was initially conceived for circumventing atmospheric turbulence in astronomy, its simple operation principle has found applications in other areas, such as visual optics [3,4], microscopy [9,10], and biomedical imaging [11,12].


Fig. 1. Complex amplitude retrieval using spatial wavefront sampling compared to current imaging and wavefront sensing techniques. (a) In a conventional imaging system, the phase information carried by the light coming from an object (for example, a biological sample) is completely lost, as the object image is formed onto a detector that simply measures the light irradiance. (b) Digital holography captures the interference between the light coming from the object and a reference beam, allowing one to retrieve the complex amplitude of the object from irradiance measurements. (c) Shack–Hartmann sensors measure the displacements of different foci generated by an array of lenslets. From those displacements, the wavefront impinging onto the lenslet plane can be reconstructed, providing information about the phase variations introduced by the object. (d) Our technique uses a sequence of illumination amplitude patterns and a single focusing lens. The object’s complex amplitude can be obtained from the statistical properties of the irradiance distribution measured in the Fourier plane of the lens. The same operation principle can be applied if, instead of the configuration shown in the figure, the light coming from an object is modulated by a set of patterns generated on a spatial light modulator (SLM).


Two main groups of techniques have emerged to tackle the problem of recovering the complex amplitude of the light field. The first one includes interferometric approaches, which measure the interference between light coming from the object and a reference beam [Fig. 1(b)]. Although they are extremely powerful for conducting precise phase measurements, their high sensitivity to environmental perturbations (such as mechanical vibrations and changes in temperature) and the need for a reference beam (not always attainable) hinder the implementation of portable and compact interferometric imaging systems in many applications. As an alternative, a second group of techniques has emerged, whose objective is to recover the same information without the need for a reference beam. These non-interferometric approaches rely on several assumptions about the object beam and use mathematical algorithms to infer the wavefront information [13]. Examples of these kinds of techniques are Fourier ptychography [14], coherent diffractive imaging [15,16], phase imaging based on the transport-of-intensity equation [17], and phase imaging with randomized illumination [18]. By eliminating the need for a reference beam, simpler and more robust devices can be designed. Furthermore, computational approaches also enable us to extend the physical capabilities of imaging systems, providing increased field of view [19], optical sectioning [20], or superresolution [21,22]. However, the recovery algorithms used in those techniques usually entail long post-processing times or multiple data acquisitions to recover one image.

Among the non-interferometric techniques, Shack–Hartmann (SH) wavefront sensors [23] are currently the most frequently employed for measuring optical aberrations. SH wavefront sensors combine a lenslet array with a pixelated detector, such as a charge-coupled device (CCD) or a CMOS camera. By placing the detector at the focal plane of the lenslet array, the positions of the foci generated by each lenslet can be linked to the phase of the wavefront in each region [Fig. 1(c)]. In other words, each lenslet samples the phase information of a region of the wavefront. This method provides a very compact system that is easy to use and align and that requires no complex post-processing stages. For these reasons, SH sensors are used in many scientific fields [4,24–27]. However, the number of lenslets, their focal length, and their diameter limit the spatial resolution, dynamic range, and sensitivity of SH wavefront sensing. Although multiple solutions have been proposed to manage the several trade-offs between the above magnitudes [28–36], manufacturing processes still place a boundary on the size and curvature of available lenslets, constraining the attainable spatial resolution of commercial SH sensors. Quadriwave lateral shearing interferometry [37], also known as the modified Hartmann mask technique, makes it possible to increase the SH spatial resolution by a factor of four [38]. Alternatively, phase imaging can be performed with pyramid wavefront sensors, which employ a four-sided refractive pyramid to split the Fourier plane into four parts, each one generating a phase-gradient image on a pixelated detector [39]. Further developments replace the pyramid by a quadri-foil lens [40] or a liquid crystal display [41]. Despite the benefits of this approach in terms of spatial resolution, the illumination numerical aperture determines the detectable dynamic range, so in practice only relatively smooth phase gradients can be precisely recorded [42].

Here, we perform phase imaging using an alternative non-interferometric approach to measure the complex amplitude of a wavefront. We overcome the inherent limitations in spatial resolution, optical efficiency, and dynamic range found in SH wavefront sensing. We sample the wavefront with a set of binary amplitude masks generated by a high-speed SLM. A single focusing lens forms a time-dependent light distribution on its focal plane, where an irradiance detector is placed. By measuring the changes in the total irradiance and the centroid position of that distribution, both the amplitude and phase of the wavefront can be recovered [Fig. 1(d)]. One advantage of this time-multiplexed detection scheme is that it can be implemented using compressive sensing (CS), allowing us to go beyond the Shannon–Nyquist condition, with the consequent reduction in acquisition time [43,44]. Unlike previous phase-retrieval methods based on SH wavefront sensors or wavefront modulation [45–49], our approach is lenslet-free and relies neither on iterative algorithms nor on interferometric measurements. Moreover, it does not require the phase-unwrapping algorithms employed by some interferometric techniques to overcome their limited phase-unambiguity range [50]. As a proof of concept, we perform aberration sensing as well as phase imaging of a low-contrast sample with variable optical thickness. The corresponding results are compared, respectively, with Shack–Hartmann wavefront sensing and phase-shifting interferometry.

2. OPERATION PRINCIPLE

The basic idea of our technique is to use the relationship between the amplitude and phase of a wavefront and the statistical moments of its Fourier irradiance distribution. This idea is also at the heart of current SH sensors, as they sample a wavefront by means of an array of lenslets and measure the centroid position (the first-order statistical moment) of the Fourier distribution created by each lenslet, allowing one to derive the local slope of the wavefront. By placing a pixelated detector (typically a CCD) at the focal plane of the lenslet array, SH sensors fully map the local slopes of the wavefront. After that measurement, numerical integration provides the phase of the wavefront at each lenslet position [51] [see Fig. 2(a)].
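The final integration step can be sketched numerically. A common choice is a least-squares, Fourier-domain slope integrator of the Frankot–Chellappa type; the periodic boundary conditions and uniform grid assumed below are illustrative simplifications, not the particular solver shipped with commercial SH sensors:

```python
import numpy as np

def integrate_slopes(gx, gy, dx=1.0):
    """Least-squares phase from two slope maps via a Fourier-domain
    (Frankot-Chellappa-type) solver. Assumes periodic boundaries and a
    uniform grid; the result is defined up to an additive constant."""
    ny, nx = gx.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    KX, KY = np.meshgrid(kx, ky)
    denom = KX**2 + KY**2
    denom[0, 0] = 1.0  # avoid dividing by zero at the DC term
    # In Fourier space, G_x = i*kx*Phi and G_y = i*ky*Phi, so
    # Phi = -i*(kx*G_x + ky*G_y) / (kx^2 + ky^2)
    num = -1j * (KX * np.fft.fft2(gx) + KY * np.fft.fft2(gy))
    phi = np.fft.ifft2(num / denom).real
    return phi - phi.mean()  # fix the free additive constant
```

The same routine serves both the SH slope maps discussed here and the slope maps derived from centroid displacements in the technique presented below.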


Fig. 2. Operation principle of different wavefront sensing approaches. (a) Shack–Hartmann wavefront sensing. A wavefront coming from an object passes through an array of lenslets, which produce a distribution of focal spots on the detector. The position of each spot is linked to the local slope of the wavefront on every lenslet. Numerical integration of the slope data provides the phase of the wavefront. (b) Raster scanning wavefront sensing. A small amplitude aperture is moved over the wavefront plane and, for each consecutive scanning position, the Fourier distribution of the emerging light is recorded. The total irradiance of that distribution at each aperture position can be used to recover the amplitude image of the object. Additionally, the relative centroid location of each Fourier distribution provides the local slope of the wavefront at each scanning position, which allows one to obtain the phase image of the object. (c) Spatial wavefront sampling. Instead of using a small scanning aperture, the wavefront coming from the object is sampled by a set of amplitude masks (reconstruction basis). Now, from the irradiance of each Fourier distribution generated on the detector plane, one can measure the mathematical projection of the object amplitude onto the reconstruction basis. By demultiplexing that information, the object amplitude is spatially resolved. In the same way, demultiplexing the data concerning the centroid locations provides the slope map of the wavefront and then, after numerical integration, the object phase image.


Several trade-offs arise from the SH configuration. First, the spatial resolution of the measurement is determined by the total number of lenslets and their size. In order to increase the spatial resolution, arrays with a higher density of lenslets can be built. However, reducing the size of each lenslet also decreases the amount of light at each region of the detector. If the signal-to-noise ratio of the measurement is low, the accuracy of the centroid position detection is greatly reduced, which hinders a reliable wavefront reconstruction. Even if the detector can work in low-light-level scenarios, building large arrays of lenslets with a diameter below 100 μm and an accurate curvature is technologically challenging [52]. Moreover, centroid displacement and wavefront slope are related via the lenslet focal length, so, for a given slope, the longer the focal length, the larger the measured centroid displacements. This brings about a second trade-off, since a strongly aberrated wavefront may produce centroid displacements larger than the size of the detection area allocated to each lenslet on the sensor. The result is crosstalk between nearby spots that produces errors in the reconstructed phase, limiting the attainable dynamic range of the sensor. This problem can be circumvented (without sacrificing sensitivity) by a number of techniques that track the real spot location or infer it computationally [29]. Instead of trying to expand the dynamic range of a sensor by increasing its hardware and/or software complexity, an alternative approach is to implement a reconfigurable (adaptive) SH sensor that includes an array of diffractive lenslets programmed onto an SLM [28,33]. The lenslet characteristics can then be chosen to optimally match the requirements of a specific application, although the trade-off between sensitivity and dynamic range still remains.
Another aspect of SH sensors that is usually not considered is the fact that the total amount of light arriving at each detector region provides a measurement of the light power at the corresponding lenslet position. Using that information, one can map the irradiance of the light coming from an object. However, due to the relatively poor spatial resolution of SH sensors (usually around a thousand lenslets), they can barely compete with interferometric systems to measure the complex amplitude of an object with high spatial frequency content.

Another approach for measuring the local slope of a wavefront is the use of a small scanning aperture and a single lens rather than an array of lenslets [Fig. 2(b)]. For each position of the aperture, only light from a region of the wavefront will be focused by the lens. This light will generate a focal spot on the Fourier plane of the lens where the detector is placed. For a small enough aperture, the wavefront can be locally approximated by a plane wave, and it can be demonstrated (see Methods) that the position of the centroid of the distribution is related to the gradient of the phase inside the aperture by the equation

$$\boldsymbol{\Delta}=(\Delta x,\Delta y)=\frac{\lambda f}{2\pi}\nabla\phi,$$
where $\nabla=\hat{i}\,\partial/\partial x+\hat{j}\,\partial/\partial y$ represents the gradient operator in two dimensions, λ is the light wavelength, and f is the focal length of the focusing lens. One can see the combination of this small aperture and the single lens as one of the lenslets of an SH array. By scanning the wavefront with the aperture, the phase can be easily recovered by numerical integration. Furthermore, calculating the zero-order moment of the light distribution (i.e., its total irradiance) provides a measurement of the wavefront amplitude at the position of the aperture, leading to the recovery of an amplitude image. Although this approach would solve the spatial resolution problem of SH sensors, given that it is easier to generate small apertures than small lenslets, it still presents a trade-off between the spatial resolution and the amount of light at each area of the detector plane, unless the scanning is performed with a laser combined with a galvanometric mirror.
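This relation can be checked numerically by propagating a wavefront with a known linear phase ramp to the focal plane via a discrete Fourier transform. The wavelength and focal length below match the experimental values quoted later (532 nm, 150 mm); the grid, the Gaussian amplitude envelope, and the tilt value are arbitrary assumptions made for this sketch:

```python
import numpy as np

wavelength = 532e-9    # m, laser wavelength used in the experiments
focal = 150e-3         # m, focal length of the condensing lens
N, pitch = 256, 10e-6  # sampling grid and pixel pitch (assumed values)

# Wavefront with a known linear phase ramp under a Gaussian envelope
slope = 2e4            # phase gradient along x, rad/m (arbitrary test value)
x = (np.arange(N) - N // 2) * pitch
X, Y = np.meshgrid(x, x)
field = np.exp(-(X**2 + Y**2) / (2 * (N * pitch / 16)**2)) * np.exp(1j * slope * X)

# Fraunhofer propagation to the lens focal plane: a scaled Fourier transform
irr = np.abs(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(field))))**2
u = np.fft.fftshift(np.fft.fftfreq(N, d=pitch)) * wavelength * focal

# First-order moment (centroid) along x versus the prediction of the equation
cx = (irr.sum(axis=0) * u).sum() / irr.sum()
predicted = wavelength * focal * slope / (2 * np.pi)
```

For these parameters, `cx` and `predicted` agree at about a quarter of a millimeter, illustrating the proportionality between focal-spot displacement and local phase gradient.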

Our technique uses 2D structured illumination patterns as an alternative to a small aperture. As can be seen in Fig. 2(c), instead of just selecting a small region of the wavefront, a time-variable mask (a sequence of programmed patterns) simultaneously selects several regions of the wavefront. This procedure resembles those used in structured illumination microscopy or in single-pixel imaging [44,53,54]. Different single-pixel approaches have been recently proposed to obtain phase information using two-beam [55,56] and common-path [57] interferometry. However, our approach is non-interferometric and uses a simpler geometric formulation similar to the one found in Shack–Hartmann sensors. It can also be extended to work with partially coherent light, and the detector in use makes it relatively easy to operate in other spectral regions, such as the infrared or terahertz (THz) bands. It must be noted that, as in single-pixel imaging, the technique presented here can work in two different configurations: the object (or wavefront) under study can be imaged onto the SLM, or the codified patterns can be projected onto the object plane [58–61]. The key requirement is that the mask and the object wavefront overlap in a single plane. Even though the irradiance distribution generated by the focusing lens results from the contribution of light coming from different areas of the sampling plane, the amplitude and phase information can still be retrieved, provided that the sequence of sampling patterns is known (see Methods). Now, the amount of light arriving at the detector for every pattern is greatly increased in comparison to the scanning-aperture scheme. This fact is commonly known as the multiplex or Fellgett's advantage, and it has been extensively exploited to increase the signal-to-noise ratio of optical measurements over the last decades [54,62–64].
In our proposal, the set of masks is a shifted and rescaled version of the Walsh–Hadamard functions [65], since this choice provides several benefits. First, Walsh–Hadamard functions are binary, so they are a good choice when working with binary amplitude modulators, as is the case of digital micromirror devices (DMDs), which reach very high refresh rates (around 22 kHz). Second, the sampling patterns have the same number of absorbing and transmitting (or reflecting) areas, regardless of the size of those areas. This fact is important because, once the window size on the modulator has been fixed, increasing or decreasing the spatial resolution of the masks does not change the total amount of light transmitted or reflected by the device. Thus, irrespective of the chosen spatial resolution, each measurement always works with the same number of incident photons, unlike in the wavefront sensing approaches described above. Furthermore, since in every measurement only a single spatial light distribution is detected, the crosstalk errors that limit the dynamic range of an SH sensor are not present here. However, there is also a drawback: when operating with Hadamard functions, the number of photons used per measurement decreases by a factor of two, since only half of the DMD mirrors are used to illuminate the sample. Nevertheless, this approach makes it possible to use compressive sensing measurements, considerably increasing the acquisition speed.
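A minimal sketch of this sampling scheme, using hypothetical one-dimensional signals (in the experiment, each matrix row is reshaped into a 2D DMD pattern): the shifted binary masks are built from a Sylvester-construction Hadamard matrix, and the measurements are demultiplexed in closed form, exploiting the fact that the first mask is all-ones and therefore measures the total signal.

```python
import numpy as np

def hadamard(n):
    """Sylvester-construction Hadamard matrix (n must be a power of two)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def binary_masks(n):
    """Shifted Walsh-Hadamard masks with entries in {0, 1}, suitable for a
    binary amplitude modulator such as a DMD."""
    return (hadamard(n) + 1) / 2

def demultiplex(y):
    """Recover the sampled signal s from measurements y = binary_masks(n) @ s.
    Since 2*y = H @ s + sum(s) and the first (all-ones) mask measures
    sum(s) = y[0], we get s = H @ (2*y - y[0]) / n, using H^-1 = H/n."""
    n = y.size
    return hadamard(n) @ (2 * y - y[0]) / n

# Hypothetical example: sample an 8-element signal and demultiplex it
signal = np.array([0.0, 1.0, 2.0, 3.0, 0.0, 1.0, 0.0, 2.0])
measurements = binary_masks(8) @ signal
recovered = demultiplex(measurements)
```

In the experiment, the same demultiplexing is applied separately to the total-irradiance sequence (for the amplitude image) and to the centroid-displacement sequences (for the slope maps).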

In order to recover the amplitude and phase information of a wavefront, our method requires the measurement of the zero- and first-order statistical moments of the spatial distribution of light, i.e., its total irradiance and the position of its centroid. The classical approach to measuring those quantities is to place a pixelated sensor in the Fourier plane of the lens to acquire a digital image and then compute them numerically. However, other experimental approaches can be explored. Here we have chosen a lateral position detector instead of a camera. The benefit of this detector lies in its ability to provide the beam power and the position of its centroid at rates of several kilohertz (kHz) without any computational post-processing. Although digital cameras offer better sensitivity and accuracy in centroid detection, we have opted for a simpler design to demonstrate the feasibility of our approach.
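Converting the raw outputs of such a detector into the two moments is immediate. Assuming the usual difference-over-sum normalization of a lateral-effect position sensor (the scale factor and active-area size below are illustrative, not the calibration constants of any particular device):

```python
def psd_moments(x_diff, y_diff, v_sum, size=10e-3):
    """Beam power (arbitrary units) and centroid position (m) from the sum
    and difference outputs of a lateral position-sensing detector. The
    difference-over-sum normalization removes the dependence of the position
    reading on the total beam power; 'size' is the assumed side length of
    the active area."""
    power = v_sum
    cx = (x_diff / v_sum) * size / 2
    cy = (y_diff / v_sum) * size / 2
    return power, cx, cy

# A centered beam gives zero displacement regardless of its power
power, cx, cy = psd_moments(0.0, 0.0, 4.0)
```

Because the normalization is built in, the zero-order moment (power) and first-order moment (centroid) arrive simultaneously with each projected pattern.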

3. RESULTS

A. Experimental Verification

To demonstrate our method, we built the experimental setup shown in Fig. 3(a). Light coming from a laser (Oxxius slim-532) emitting at 532 nm is collimated with a lens (L1) and impinges onto a DMD (DLP Discovery 4100 V-7000 from ViALUX). By using a 4f system formed by lenses L2 and L3, the light is projected onto an object, which modifies the wavefront phase. After passing through the object, the light goes through a condensing lens (CL) with a focal length of 150 mm. In its Fourier plane, a lateral position detector (Thorlabs PDP90A) measures the irradiance of the beam and the position of its centroid. To retrieve the amplitude and phase information of the object, a full set of Hadamard patterns is projected. The size of this set depends on the pixel size of the image one wants to obtain: for an N×N image, N² patterns must be sent (see Methods). This sequential acquisition represents a limitation of our technique: whereas an SH sensor captures all the information in a single shot, our technique relies on sequential sampling of the wavefront.


Fig. 3. Experimental verification of the proposed technique. (a) Schematics of the system. Labels: LS, laser source; L1, L2, and L3, lenses; OBJ, object; CL, condensing lens; PD, position detector. (b) Image of the detector used in our experiments. It includes four electrodes connected to a metallic surface, whose voltage measurements provide both the irradiance of the light beam and the position of its centroid. (c) Experimental results for a plate simulating a coma aberration. As the sample is transparent, the amplitude image provides no information about the object. In the phase image, the object information is clearly recovered. (d) 3D view of the recovered phase.


As a first example, we show the results for a photoresist plate representing a typical coma aberration [Figs. 3(c) and 3(d)]. In this case, the spatial resolution was set to 64×64 pixels with a pixel pitch of 82.08 μm. For this size, the total acquisition time was roughly 1 s. This time is limited not by the refresh rate of our DMD (which would need only about 200 ms to project 64² patterns) but by the bandwidth of our detector [Fig. 3(b)]. Faster lateral position sensing detectors, which are commercially available, would speed up the acquisition process. A comparison of this result with those obtained by other wavefront sensing techniques (using an SH sensor and through an interferometric measurement) can be found in Supplement 1.

B. Comparison with Shack–Hartmann Wavefront Sensing

To test the capabilities of the proposed technique, we have compared it with a commercial SH wavefront sensor (Thorlabs WFS150-5C) in several scenarios. As a first object, we employ a spherical lens [Fig. 4(a)] with a focal length of 500 mm aligned with the optical axis. This simple choice facilitates a quantitative comparison of the two methods, since we expect to recover the phase of a spherical wavefront. The object plane is imaged onto the lenslet array plane of the SH wavefront sensor by using a 4f system. The focal spots produced by the lenslet array can be seen in Fig. 4(b). Using the displacements of each focal spot with respect to a reference position (previously determined by a calibration measurement), a 3D view of the wavefront can be recovered by direct integration. To ease visualization, we also present a wrapped view of the wavefront phase [Fig. 4(c)].


Fig. 4. Reconstruction of a spherical wavefront and comparison with a commercial Shack–Hartmann sensor. (a) Photograph of the lens used as a phase object. (b) Spot map generated by the SH lenslet array. By comparing the position of each spot to a reference value, the spot displacements can be calculated. From those data, a 3D view of the phase can be recovered. (c) Wrapped phase of the lens. (d) Small region of the full displacement map for our system. Those displacements can be related to the gradient of the phase. Numerical integration of those gradients provides the phase of the object, and after decomposition in the Zernike basis, a high-resolution 3D view of the phase can be displayed (plot on the right). (e) Wrapped phase of the lens obtained using the proposed technique.


Using the optical setup shown in Fig. 3(a), we reconstruct the wavefront phase by using spatial wavefront sampling. A small region of the full measured 64×64 map of centroid displacements is shown in Fig. 4(d). The modulo-2π phase reconstructed from those displacements can be observed in Fig. 4(e). The total phase change over the aperture is 9.67λ, a value that differs by 0.06λ from that obtained using the SH sensor.

As a second experiment, we move the lens away from the optical axis, thus producing a larger phase gradient. Given the technical specifications of our SH sensor, the maximum measurable total phase change over the aperture of the sensor is around 100λ. When the phase gradient introduced by the off-axis lens produces a higher phase variation, the focal spots move so much in one direction that crosstalk appears and the sensor is not able to provide reliable results.

In the top row of Fig. 5, we show the result obtained by using the SH sensor at the limit of its dynamic range (here we show a wrapped view of the phase to ease visualization). This limit can be considerably exceeded using the setup shown in Fig. 3(a). As can be seen in the bottom row of Fig. 5, spatial wavefront sampling allows us to measure a total phase change of 217λ. This improvement in the dynamic range is due to the relative increase in the size of the detector area attained by our technique. In an SH sensor, the detector is divided into a number of regions equal to the number of lenslets in the array. Each region, containing a small number of pixels, is used to map the position of one focal spot and sets a limit on the maximum spot displacement. As mentioned before, moving beyond that limit causes several focal spots to share the same region of the detector, leading to significant errors in the reconstructed phase. Since our technique sequentially detects the light distribution as a whole, the full size of the sensor can be used in every consecutive measurement. As a result, the crosstalk penalty typically found in an SH sensor is no longer present. As in the design stage of SH sensors, fine tuning of the focal length of the condenser lens used in our setup, as well as a proper selection of the size of the detector, will provide larger or smaller dynamic ranges and sensitivities in the phase detection.


Fig. 5. High dynamic-range measurement and comparison with a commercial SH sensor. In the top row, we show the wrapped phase measured with the aid of an SH sensor when the lens shown in Fig. 4(a) is placed in an off-axis position. The maximum phase change measurable by the commercial sensor is 100λ. A zoomed region, marked in red, is shown in the right part of the figure. In the bottom row, we show the results for the same object obtained with the system shown in Fig. 3(a). In this case, the lens has been displaced a greater distance from the optical axis. The measured phase change is 217λ. A zoomed region, marked in blue, is shown in the right part of the figure. Given the small size of both figures, some moiré artifacts that are not part of the fringe pattern of a spherical phase gradient appear in the images on the left (see Supplement 1).


C. Use of Compressive Sensing

Given the similarities between our technique and single-pixel imaging, some advanced undersampling approaches can be used to improve both the acquisition speed and the light efficiency of our method.

CS provides a recovery framework for signals sampled below the limit imposed by the Shannon–Nyquist theorem [43]. For a sequential signal sampling, this implies a reduction in measurement time. Once the signal has been sampled, an optimization algorithm provides an estimation of the original signal. This estimation is usually based on sparsity constraints and provides a high-fidelity reconstruction (see Methods).
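As a toy sketch of this kind of sparsity-constrained recovery, the following uses ISTA (a simple proximal-gradient solver for the l1-regularized least-squares problem) on a synthetic sparse signal measured with a random subset of Hadamard patterns. It illustrates the principle only and is not the l1-magic solver used in our experiments; the signal, subset size, and regularization weight are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def hadamard(n):
    """Sylvester-construction Hadamard matrix (n must be a power of two)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

n, k = 64, 32                       # signal length and number of measurements (k < n)
s = np.zeros(n)
s[[3, 17, 40]] = [1.0, -0.5, 2.0]   # sparse test signal

rows = rng.choice(n, size=k, replace=False)
A = hadamard(n)[rows] / np.sqrt(n)  # orthonormal rows -> unit gradient step size
y = A @ s                           # compressed measurements

# ISTA: gradient step on ||y - A x||^2 followed by soft thresholding (l1 prox)
lam = 1e-3
x = np.zeros(n)
for _ in range(3000):
    x = x + A.T @ (y - A @ x)
    x = np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)
```

With half the Nyquist number of measurements, the estimate `x` lands close to the original sparse signal, mirroring the undersampling gain reported below for smooth wavefronts (which are sparse in the Zernike basis rather than in the pixel basis).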

In our experiments, we have used standard CS algorithms (the l1-magic package [66]) to recover the coma aberration presented in Fig. 3. The results are shown in Fig. 6.


Fig. 6. Evolution of reconstruction quality when CS is used for wavefront recovery. For different values of the compression ratio (k/N², k being the number of projected patterns), we present the correlation coefficient between the CS estimation and the recovery without undersampling. Here, the images have a resolution of 64×64 pixels, so N² = 4096. The points represent the median of the correlation coefficient for 100 different realizations of the algorithm (each realization corresponds to a different subset of random measurements).


As can be seen, for relatively smooth aberrations, estimations with fidelity higher than 90% can be recovered using only 10% of the total number of measurements established by the Shannon–Nyquist criterion. Figure 6 includes two insets that show individual reconstructions for compression ratios corresponding to correlation coefficients over 0.9 and 0.99, respectively (points A and B in the fidelity plot). As in our commercial Shack–Hartmann sensor, the recovered wavefronts are expressed in the Zernike basis, and high-resolution images are generated from that expansion (see Supplement 1 for details). Taking this into account, high-resolution images with acquisition times of 20 ms could be achieved operating at the full frame rate of the DMD (20 kHz), allowing us to perform real-time aberrometry. This could be very useful in ophthalmic scenarios, where the patient needs to stand still while the eye aberrations are measured. Patterned illumination could also be beneficial in biological scenarios, where photodamage thresholds can limit the amount of light that can be used at each region of the sample when working in a reflection configuration [67,68]. By using this CS-based procedure, one would be able to increase the measured signal level without causing photodamage.
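The quoted acquisition time follows directly from the pattern count and the DMD frame rate given in the text:

```python
patterns = 64 * 64    # full Nyquist set for a 64x64 image
compression = 0.10    # fraction of patterns kept under CS
frame_rate = 20e3     # Hz, full DMD frame rate quoted above

t = patterns * compression / frame_rate
# -> about 20 ms per compressed high-resolution acquisition
```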

D. Complex Amplitude Retrieval and Comparison with Phase-Shifting Interferometry

Our technique offers the possibility of reconstructing not only the phase distribution of a wavefront coming from an object but also its amplitude. For each illumination pattern, the light power measured by the detector provides the projection of the wavefront amplitude onto the basis of Hadamard functions. By measuring all the successive projections, the object amplitude image can be recovered offline, following the operation principle of single-pixel imaging [44] (see Methods). A simple example with an object composed of an amplitude mask attached to the lens used in the previous section can be seen in Supplement 1.

In principle, the complex amplitude of a wavefront can be retrieved by means of an SH sensor, but with a relatively poor spatial resolution (given by the lenslet diameter and the lens fill factor). As a consequence, spatial features smaller than a few hundred micrometers are lost in the recovery process. Even though high-resolution SH sensors are being developed and can be used to perform phase imaging, the trade-offs between dynamic range and sensitivity are still present [69]. In our technique, the limit in the spatial resolution is ultimately given by the modulator pixel size (typically around 10 μm) and the magnification of the projecting system. In our setup, this represents a resolution improvement of one order of magnitude with respect to a typical SH sensor.

To demonstrate this improvement, we use as a sample a thin layer of photoresist material. We placed the photoresist over a transparent plate, creating an object with different regions with and without material, as can be seen in Fig. 7(a). Due to the difference in refractive index between the photoresist and the air, an impinging wavefront acquires a spatial phase distribution when it passes through the object. By using the setup shown in Fig. 3(a), the amplitude and phase information of the sample are recovered [Fig. 7(b)]. A 3D visualization of the phase information is shown in Fig. 7(c). The images have a spatial resolution of 128×128 pixels with a pixel pitch of 41.04 μm. Since both the photoresist and the transparent plate present similar absorption in the green region of the spectrum, the amplitude image is of low quality. However, fine details of the sample are recovered in the phase image, such as the small holes (around 80 μm in diameter) in the photoresist stripes, produced by air bubbles during the manufacturing process. These small spatial features cannot be retrieved by our SH sensor, as the diameter of each lenslet is 150 μm. Additionally, due to the low number of lenslets (39×31 = 1209), the SH sensor would only provide a clearly pixelated image. An additional recovery of the same object, using CS techniques, can be seen in Supplement 1.


Fig. 7. Comparison with phase-shifting interferometry. (a) Photograph of the photoresist layer used for the experiment. The black square represents the region that will be imaged, consisting of zones with and without photoresist material. (b) Amplitude and phase images obtained with the proposed technique. Due to the absorption properties of the object, the amplitude image presents poor quality. However, in the phase image, fine details of the sample can be observed. (c) 3D representation of the obtained phase. (d) Phase image obtained with phase-shifting interferometry for the same region of the object. In the inset, we show the region between the two small holes present in one of the photoresist bars. (e) Physical profile of the photoresist strip between the two holes obtained with an optical profilometer (Sensofar Plμ 2300).


In order to check the validity of our results, we resort to phase-shifting interferometry. This technique uses a pixelated detector, which makes it possible to exploit the high spatial resolution of a commercial CCD (in our experiment, the pixel size is 6.5 μm). The recovered phase is shown in Fig. 7(d). Unlike our wavefront sensing technique, phase-shifting interferometry only provides modulo-2π mappings [see the vertical scale in Fig. 7(d)], requiring unwrapping algorithms to reconstruct a continuous phase like the one shown in Fig. 7(b). We focus our attention on the small region between the two air bubbles in one of the photoresist stripes. After zooming in, it can be seen that the phase difference between the top of the strip and the substrate is 22.7 rad. In our measurements with structured illumination, this difference is 23 rad, in very good agreement with the previous value. We also performed a thickness measurement of the region of interest with a mechanical profilometer (Dektak 6, Veeco). By using the strip height provided by the profilometer (3.8 μm), the refractive index of the photoresist layer can be estimated from the optical path length, L = λΔϕ/(2πΔn). In both phase-shifting holography and our technique, the index estimate is 1.51, in good agreement with the nominal value of the photoresist material.
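The index estimate above can be reproduced by direct arithmetic from the values quoted in the text (phase step 22.7 rad, strip height 3.8 μm) and the 532 nm laser wavelength used in the setup; a minimal sketch:

```python
import math

# Values quoted in the text
wavelength = 532e-9          # m, green laser line
delta_phi = 22.7             # rad, phase step across the strip edge
thickness = 3.8e-6           # m, strip height from the profilometer

# Invert L = lambda * delta_phi / (2 * pi * delta_n) for delta_n,
# with L the physical thickness of the strip:
delta_n = wavelength * delta_phi / (2 * math.pi * thickness)
n_resist = 1.0 + delta_n     # index difference is taken with respect to air

print(round(n_resist, 2))    # -> 1.51
```

The result, 1.51, matches the estimate reported in the text.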

E. Discussion

We have introduced a wavefront sensor based on patterned illumination produced by an amplitude SLM. Instead of resorting to an array of lenslets, as in SH wavefront sensing, the spatial information is captured by illuminating the object with binary amplitude masks generated with a DMD. The sensor is a simple position-sensitive photodetector. This provides several benefits. First, we eliminate the spatial resolution constraints of traditional SH wavefront sensors. Second, by using the full size of the detector in each measurement, the dynamic range is greatly extended and the crosstalk problem is eliminated. Last, the use of Hadamard functions makes the method robust to changes in the spatial dimension (number of pixels) to be obtained. The total number of photons used by the system is determined only by the window size used on the DMD, since Hadamard patterns always have half of their pixels at each binary value, regardless of the function dimension. Increasing or decreasing their spatial dimension therefore does not necessarily change the total bright area displayed by the DMD. This is not the case in SH wavefront sensing, where increasing spatial resolution entails decreasing the size of the lenslets, which reduces the number of photons reaching each part of the sensor. This, in combination with the multiplex advantage, suggests that the technique is well suited to low-light-level scenarios, where similar approaches have already been proposed to obtain amplitude information [70]. Additionally, we have exploited the high spatial resolution offered by commercial DMDs to perform phase imaging with a pixel size of tens of micrometers. To this end, we have imaged a photoresist sample and calculated its local refractive index. These results have been shown to be comparable with those provided by phase-shifting interferometry, a well-established interferometric technique.

A distinctive feature of our technique is that the spatial resolution is fixed by the SLM and the projecting optics, while the other optical elements included in the sensor, the focusing lens and the light detector, set the attainable sensitivity and dynamic range. By calculating several statistical moments of the light distribution at the detector plane (its irradiance and the position of its centroid), the method provides both amplitude and phase information. This enables the technique to produce results closer to those of conventional phase-imaging approaches while retaining some of the benefits of the SH approach: its referenceless nature, and the fact that the recovery process requires neither phase-unwrapping nor iterative algorithms. In addition, the simple implementation of our system offers the possibility of using it as an add-on module for a conventional microscope [20,71], thus allowing quantitative phase imaging with submicrometer spatial resolution.

Due to the temporally multiplexed nature of the technique, there is a trade-off between spatial resolution and image acquisition time. Here, we have shown results with resolutions between 64×64 and 128×128 pixels, which take seconds to acquire. Faster, already available detectors would allow image capture at video frame rates; however, the greater the resolution, the slower the acquisition. This drawback can be considerably alleviated using CS [72] or adaptive approaches [73,74]. The feasibility of applying CS was shown in Section 3, where a typical ocular aberration was reconstructed with high compression ratios. In particular, acquisition times of around 20 ms could be achieved operating at the full frame rate of the DMD, which would open the door to real-time aberrometry with our system. Last, given the technological challenge of manufacturing both lenslet arrays and pixelated sensors that work outside the visible spectrum, the technique proposed here is a good candidate for regions such as the IR and THz, where similar approaches have already been used to obtain amplitude images [54,75].

4. METHODS

A. Phase Measurement from Centroid Position

Let us assume a light distribution at position z=0 of the form U_1(x,y) = A(x,y)\,e^{i\phi(x,y)}. If a thin lens with focal length f is placed at z=f, the complex amplitude at z=2f, obtained by Fresnel propagation, is U_2(x,y) = \frac{1}{j\lambda f}\,\tilde{U}_1\!\left(\frac{x}{\lambda f},\frac{y}{\lambda f}\right), where \tilde{U}_1 denotes the Fourier transform of U_1. The irradiance of this light distribution is I_2(x,y) = |U_2(x,y)|^2 = \frac{1}{(\lambda f)^2}\left|\tilde{U}_1\!\left(\frac{x}{\lambda f},\frac{y}{\lambda f}\right)\right|^2. Our technique is based on the calculation of several statistical moments of I_2. In particular, we use the energy, S, and the centroid position, \Delta = (\Delta_x, \Delta_y). These quantities are given by

S = \iint I_2(x,y)\,dx\,dy,
\Delta = \begin{pmatrix} \Delta_x \\ \Delta_y \end{pmatrix} = \frac{1}{S} \begin{pmatrix} \iint x\, I_2(x,y)\,dx\,dy \\ \iint y\, I_2(x,y)\,dx\,dy \end{pmatrix}.

By using Parseval's theorem, it is easy to prove that S = \iint |U_2(x,y)|^2\,dx\,dy = \iint |U_1(x,y)|^2\,dx\,dy, so the first measurement provides the energy of the wavefront. Using the moment theorem of Fourier transforms and some algebra (see Supplement 1), it can be proved that the position of the centroid at the measurement plane and the phase of the object at the original plane are related by

\Delta = (\Delta_x, \Delta_y) = \frac{\lambda f}{2\pi S} \iint A^2(x,y)\, \nabla\phi(x,y)\,dx\,dy.

For a small square aperture with lateral size L, placed at position (a,b), the amplitude of the field can be described as

A(x,y) = K \cdot \mathrm{rect}\!\left(\frac{x-a}{L}\right) \cdot \mathrm{rect}\!\left(\frac{y-b}{L}\right),
where K is a constant. Introducing this into Eq. (4), we get
\Delta_x = \frac{\lambda f K^2}{2\pi S} \int_{b-L/2}^{b+L/2} \int_{a-L/2}^{a+L/2} \frac{\partial \phi(x,y)}{\partial x}\,dx\,dy; \qquad \Delta_y = \frac{\lambda f K^2}{2\pi S} \int_{b-L/2}^{b+L/2} \int_{a-L/2}^{a+L/2} \frac{\partial \phi(x,y)}{\partial y}\,dx\,dy.

If the square aperture is small enough, the gradient of the phase can be considered constant over the integration domain. The centroid position then becomes

\Delta = (\Delta_x, \Delta_y) = \frac{K^2 \lambda f L^2}{2\pi S}\, \nabla_{a,b}\, \phi(x,y),
where \nabla_{a,b} = \left(\frac{\partial}{\partial x}\hat{i}, \frac{\partial}{\partial y}\hat{j}\right)\Big|_{x=a,\,y=b} represents the bidimensional gradient evaluated at the point x=a, y=b. By combining Eqs. (5) and (2), it is easy to prove that S = K^2 L^2, and thus the centroid of the distribution and the phase of the object are related by
\Delta = (\Delta_x, \Delta_y) = \frac{\lambda f}{2\pi}\, \nabla_{a,b}\, \phi(x,y).

It is clear that, for a small region of the wavefront, the centroid of the resulting distribution provides information about the gradient of the phase in that region. If the small square aperture is displaced, one can measure the corresponding centroid positions and then estimate the gradient map of the wavefront. After that, numerical integration provides the wavefront at the original plane. If multiple square masks are used at the same time, it is also possible to relate the measurements to the phase at each of the mask positions. Hadamard illumination can then be understood as a particular case of this procedure, and the phase recovery can be performed while illuminating the full scene with a set of Hadamard patterns (see Supplement 1).
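The single-aperture relation derived above, Δ = (λf/2π)∇ϕ, can be checked numerically: propagate a small square aperture carrying a linear phase tilt to the focal plane via an FFT and compare the irradiance centroid with the prediction. This is an illustrative sketch; the grid size, aperture side, and tilt values below are arbitrary choices, not parameters of the actual setup.

```python
import numpy as np

# Object-plane grid (assumed values for illustration)
N = 1024
dx = 2e-6                                # sample spacing, 2 um
x = (np.arange(N) - N // 2) * dx
X, Y = np.meshgrid(x, x)

lam, f = 532e-9, 0.1                     # wavelength and focal length
gx, gy = 2e4, -1e4                       # phase gradients, rad/m

# Small square aperture with a linear phase across it
L = 200e-6                               # aperture side
aperture = (np.abs(X) <= L / 2) & (np.abs(Y) <= L / 2)
U1 = aperture * np.exp(1j * (gx * X + gy * Y))

# Focal-plane field via FFT (Fraunhofer/2f geometry) and its irradiance
U2 = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(U1)))
I2 = np.abs(U2) ** 2

# Physical focal-plane coordinates: x2 = lam * f * nu_x
nu = np.fft.fftshift(np.fft.fftfreq(N, d=dx))
X2, Y2 = np.meshgrid(lam * f * nu, lam * f * nu)

S = I2.sum()
centroid = np.array([(X2 * I2).sum(), (Y2 * I2).sum()]) / S
predicted = lam * f / (2 * np.pi) * np.array([gx, gy])
print(centroid, predicted)    # the two vectors agree to within a few percent
```

The measured centroid reproduces (λf/2π)(gx, gy), which is the content of the relation used in the recovery.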

B. Object Recovery by Multiplexing with Hadamard Patterns

Every measurement can be mathematically described by the equation

y=Mx,
where the object to be measured is described by the object vector x, the measurements are represented by the vector y, and the measurement process is carried out by the sensing operator, represented by the matrix M. Once the measurements have been taken, one just needs to solve the algebraic problem posed by Eq. (9) to recover the object. The structure of the sensing operator is determined by both the nature of the object under study and the system used to recover an image. For a traditional camera, the matrix M is just the identity matrix, and each element of x is proportional to the energy of a given area of the object. By placing a detector at the image plane, the energy of each zone is measured and the object is recovered. Usually this is done with a sensor array, such as a CCD camera, and all the measurements are made at the same time. However, there are scenarios, such as confocal microscopy, where this process is conducted with a bucket detector, usually a photomultiplier tube; in that case, the measurement process is sequential. It is possible to exploit Eq. (9) to increase the capabilities of an optical setup. In our approach, instead of using the identity matrix as the sensing operator, each row of the matrix M contains a Hadamard function. The measurement process then consists of projecting each Hadamard function onto the object and measuring the energy of the resulting superposition with the detector. This projection can be performed with a SLM, as is done in our phase-imaging system. With this approach, each measurement collects light coming from several regions of the object, which increases the amount of light per measurement and thus the signal-to-noise ratio. As the Hadamard functions form an orthonormal basis, Eq. (9) can be easily solved.
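The multiplexed measurement and its inversion can be sketched in a few lines: build a ±1 Hadamard matrix by the Sylvester construction, simulate the sequential bucket measurements y = Mx, and recover x using the orthogonality of the rows. (On the actual DMD, shifted and rescaled 0/1 masks are displayed and the ±1 values are obtained afterwards; here we work directly with ±1 for brevity, and the toy 64-element scene is an arbitrary example.)

```python
import numpy as np

# Toy object: a random 64-element scene standing in for the sampled object
N = 64
rng = np.random.default_rng(0)
x = rng.random(N)

# Sylvester construction of the Hadamard matrix (entries +1/-1)
H = np.array([[1]])
while H.shape[0] < N:
    H = np.block([[H, H], [H, -H]])

# Sequential "single-pixel" measurements, one per displayed pattern
y = H @ x

# Rows are orthogonal (H @ H.T = N * I), so inversion is a transpose
x_rec = H.T @ y / N
print(np.allclose(x_rec, x))    # True
```

Because the basis is orthogonal, no iterative solver is needed at the Nyquist sampling rate; the transpose inverts the system exactly.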

Once this measurement scheme has been introduced, it is easy to see that images of different physical parameters can be recovered, provided the measurement process can be described by Eq. (9). In particular, our centroid measurements can be arranged in matrix form as

\Delta = \frac{\lambda f}{2\pi}\, M\, \nabla\phi.

Here, Δ is the vector containing the measured centroid positions, and the object is now the spatial gradient distribution, expressed as a vector, ∇ϕ (each element of this vector is the gradient for a given illumination pattern). Again, in our experiments we used the Hadamard functions as rows of the sensing matrix. For each Hadamard function projected onto the object, the energy of the distribution and the position of its centroid are measured with the position-sensing detector. Then, Eq. (10) is solved and the gradient of the phase is obtained. After that, numerical integration provides the phase of the wavefront, ϕ(x,y).
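The full pipeline of Eq. (10), simulated centroid data → gradient recovery → numerical integration, can be sketched in 1D. The sinusoidal test phase, the 64-element resolution, and the focal length are illustrative assumptions; only the pixel pitch (41.04 μm) is taken from the text.

```python
import numpy as np

N = 64
lam, f = 532e-9, 0.1                 # wavelength and focal length (assumed)
dx = 41.04e-6                        # pixel pitch quoted in the text
xs = np.arange(N) * dx

k = 2 * np.pi / (N * dx)
phi = 40 * np.sin(k * xs)            # ground-truth phase profile (rad)
grad = 40 * k * np.cos(k * xs)       # its analytic spatial gradient

# Hadamard sensing matrix via the Sylvester construction (+1/-1 entries)
H = np.array([[1]])
while H.shape[0] < N:
    H = np.block([[H, H], [H, -H]])

# Simulated centroid measurements, one per pattern (Eq. (10))
delta = lam * f / (2 * np.pi) * (H @ grad)

# Invert the orthogonal system to recover the gradient
grad_rec = (2 * np.pi / (lam * f)) * (H.T @ delta) / N

# Trapezoidal numerical integration recovers the phase up to a constant
phi_rec = phi[0] + np.concatenate(
    ([0.0], np.cumsum(0.5 * (grad_rec[1:] + grad_rec[:-1]) * dx)))
```

The recovered profile matches the ground truth to within the small error of the trapezoidal rule, mirroring the measure-invert-integrate sequence described above.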

C. Compressive Sensing

Solving the problem given by Eq. (9) is easy when sampling the object at the Shannon–Nyquist rate. However, for k<N measurements, the equation system has infinitely many solutions. The fundamental idea of CS is to exploit the fact that most signals found in nature are sparse in some basis of functions, i.e., they have a representation in some basis where most of the expansion coefficients are zero, so only a small number of them contain relevant information. This fact can be used to provide a solution to the above-mentioned underdetermined equation system. We can express the measurements as y = Mx = M\Psi\alpha = \Phi\alpha, where we have represented the object in a given basis of functions (x = \Psi\alpha). Then, CS algorithms provide a solution to the l1-norm minimization problem,

\hat{\alpha} = \arg\min_{\alpha} \|\alpha\|_1 \quad \text{such that} \quad \Phi\alpha = y.

In order to find a good solution, some premises need to be fulfilled [76]. First, the number of measurements cannot be arbitrarily low: depending on the sparsity of the object (which depends on the chosen recovery basis), a higher or lower number of measurements is needed to obtain good results. Second, the basis pair (measurement and recovery) needs to be incoherent. In practice, it is usually easy to find pairs of bases that fulfill this principle: the measurement basis is typically chosen for its ease of generation on a SLM, and once it has been picked, the user selects a recovery basis that fulfills the incoherence constraint and in which the object is sparse. Last, a sensing strategy needs to be defined. Initially, it was proposed that randomly choosing a subset of elements of the measurement basis was enough to obtain good results [43]. Soon after, other strategies were defined to reduce the number of measurements while maintaining image quality. For example, it is known that natural scenes tend to have a power spectrum concentrated at low frequencies, so mixed sampling strategies have been used with very good results [63].

In our experiments, M represents the shifted and rescaled Hadamard basis (whose elements are either 0 or 1), easily generated on a DMD, and Ψ is chosen to be the Haar wavelet basis, in which most images tend to be sparse. The measurements were performed following the mixed approach: fixing the number of samples, k, we chose the k/2 lowest-frequency Hadamard patterns and then selected k/2 random Hadamard patterns from the remaining elements of the complete basis.
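The mixed pattern-selection strategy can be written down directly. In this sketch, pattern indices stand in for a low-to-high frequency (sequency) ordering of the Hadamard basis; the mapping from an index to an actual DMD mask, and the basis size of 4096, are assumptions for illustration.

```python
import numpy as np

def mixed_hadamard_indices(n_basis, k, seed=None):
    """Pick k pattern indices: the k/2 lowest-frequency ones plus
    k/2 drawn at random (without replacement) from the rest."""
    rng = np.random.default_rng(seed)
    low = np.arange(k // 2)                       # deterministic low-frequency block
    rest = np.arange(k // 2, n_basis)             # candidates for the random block
    high = rng.choice(rest, size=k - k // 2, replace=False)
    return np.concatenate([low, high])

# e.g., a 64x64 image (4096 patterns) sampled with a 4x compression ratio
idx = mixed_hadamard_indices(n_basis=4096, k=1024, seed=0)
print(idx.size)    # 1024 patterns, always including the lowest frequencies
```

The deterministic low-frequency block captures the bulk of the power spectrum of natural scenes, while the random block supplies the incoherence that CS recovery relies on.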

D. Phase-Shifting Holography Measurements

For the measurements using phase-shifting digital holography, we used an interferometer in a Mach–Zehnder configuration. A collimated laser (Oxxius slim-532) was divided by a beam splitter into an object beam and a reference beam. The former illuminated the object while the latter traveled directly towards the camera (Allied Stingray F-145). The object, a layer of photoresist spin-coated onto a transparent plate, was imaged onto the camera by an optical system in a 4f configuration. The phase shifts were generated by shifting a grating encoded onto a DMD (DLP Discovery 4100) and filtering the first diffraction order in the Fourier plane of a 4f optical configuration located before the object plane [77]. The camera recorded four interferograms with a phase-shift interval of π/2, and a standard phase-shifting algebraic operation [78] was used to recover the amplitude and phase at the object plane.
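The four-frame phase-shifting recovery can be illustrated with synthetic interferograms. This sketch uses one common sign convention for the four-step algorithm (reference shifts 0, π/2, π, 3π/2); the exact formula used in [78] may adopt a different labeling, and the test amplitudes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
phi = rng.uniform(-np.pi, np.pi, size=(32, 32))   # synthetic object phase
A, R = 0.8, 1.0                                   # object / reference amplitudes

# Four interferograms: I_d = A^2 + R^2 + 2 A R cos(phi + d)
shifts = [0, np.pi / 2, np.pi, 3 * np.pi / 2]
I = [np.abs(A * np.exp(1j * phi) + R * np.exp(-1j * d)) ** 2 for d in shifts]

# Standard four-step combination of the frames
phi_rec = np.arctan2(I[3] - I[1], I[0] - I[2])              # wrapped phase
amp_rec = np.sqrt((I[3] - I[1]) ** 2 + (I[0] - I[2]) ** 2) / (4 * R)

print(np.allclose(phi_rec, phi))    # True: wrapped phase is recovered
```

Note that the output is the modulo-2π phase mentioned in Section 3, which is why unwrapping is needed before comparing with the continuous maps produced by the proposed technique.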

Funding

Generalitat Valenciana (PROMETEO/2016/079); Ministerio de Economía y Competitividad (MINECO) (FIS2015-72872-EXP, FIS2016-75618-R); Universitat Jaume I (UJI) (P1-1B2015-35, PREDOC/2013/32).

Acknowledgment

We thank Salvador Bará for the photoresist plates simulating optical aberrations used in our experiments. Author contributions: F. S. and E. T. conceived the idea; F. S. performed the experiments with the aid of P. C. and V. D.; F. S. and V. D. wrote the paper; all the authors discussed the results and revised the paper; J. L. and E. T. coordinated the project.

 

See Supplement 1 for supporting content.

REFERENCES

1. F. Zernike, “How I discovered phase contrast,” Science 121, 345–349 (1955). [CrossRef]  

2. G. Popescu, “The power of imaging with phase, not power,” Phys. Today 70(5), 34–40 (2017). [CrossRef]  

3. J. Liang, B. Grimm, S. Goelz, and J. F. Bille, “Objective measurement of wave aberrations of the human eye with the use of a Hartmann–Shack wave-front sensor,” J. Opt. Soc. Am. A 11, 1949–1957 (1994). [CrossRef]  

4. P. Artal, “Optics of the eye and its impact in vision: a tutorial,” Adv. Opt. Photon. 6, 340–367 (2014). [CrossRef]  

5. G. Nehmetallah and P. P. Banerjee, “Applications of digital and analog holography in three-dimensional imaging,” Adv. Opt. Photon. 4, 472–553 (2012). [CrossRef]  

6. D. Gabor, “Microscopy by reconstructed wave-fronts,” Proc. R. Soc. A 197, 454–487 (1949). [CrossRef]  

7. M. Mir, B. Bhaduri, R. Wang, R. Zhu, and G. Popescu, “Quantitative phase imaging,” in Progress in Optics (Elsevier, 2012), Vol. 57, pp. 133–217.

8. R. Tyson, Principles of Adaptive Optics, Series in Optics and Optoelectronics, 3rd ed. (CRC Press, 2010), Vol. 20102628.

9. M. J. Booth, “Adaptive optical microscopy: the ongoing quest for a perfect image,” Light Sci. Appl. 3, e165 (2014). [CrossRef]  

10. N. Ji, “Adaptive optical fluorescence microscopy,” Nat. Methods 14, 374–380 (2017). [CrossRef]  

11. K. Wang, D. E. Milkie, A. Saxena, P. Engerer, T. Misgeld, M. E. Bronner, J. Mumm, and E. Betzig, “Rapid adaptive optical recovery of optimal resolution over large volumes,” Nat. Methods 11, 625–628 (2014). [CrossRef]  

12. A. Kumar, W. Drexler, and R. A. Leitgeb, “Subaperture correlation based digital adaptive optics for full field optical coherence tomography,” Opt. Express 21, 10850–10866 (2013). [CrossRef]  

13. Y. Shechtman, Y. C. Eldar, O. Cohen, H. N. Chapman, J. Miao, and M. Segev, “Phase retrieval with application to optical imaging: a contemporary overview,” IEEE Signal Process. Mag. 32, 87–109 (2015). [CrossRef]  

14. G. Zheng, R. Horstmeyer, and C. Yang, “Wide-field, high-resolution Fourier ptychographic microscopy,” Nat. Photonics 7, 739–745 (2013). [CrossRef]  

15. H. N. Chapman and K. A. Nugent, “Coherent lensless x-ray imaging,” Nat. Photonics 4, 833–839 (2010). [CrossRef]  

16. R. Horisaki, Y. Ogura, M. Aino, and J. Tanida, “Single-shot phase imaging with a coded aperture,” Opt. Lett. 39, 6466–6469 (2014). [CrossRef]  

17. N. Streibl, “Phase imaging by the transport equation of intensity,” Opt. Commun. 49, 6–10 (1984). [CrossRef]  

18. R. Horisaki, R. Egami, and J. Tanida, “Single-shot phase imaging with randomized light (SPIRaL),” Opt. Express 24, 3765–3773 (2016). [CrossRef]  

19. L. Tian, X. Li, K. Ramchandran, and L. Waller, “Multiplexed coded illumination for Fourier ptychography with an LED array microscope,” Biomed. Opt. Express 5, 2376–2389 (2014). [CrossRef]  

20. L. Tian and L. Waller, “3D intensity and phase imaging from light field measurements in an LED array microscope,” Optica 2, 104–111 (2015). [CrossRef]  

21. J. J. Field, K. A. Wernsing, S. R. Domingue, A. M. Allende Motz, K. F. DeLuca, D. H. Levi, J. G. DeLuca, M. D. Young, J. A. Squier, and R. A. Bartels, “Superresolved multiphoton microscopy with spatial frequency-modulated imaging,” Proc. Natl. Acad. Sci. USA 113, 6605–6610 (2016). [CrossRef]  

22. P. Gao, G. Pedrini, and W. Osten, “Phase retrieval with resolution enhancement by using structured illumination,” Opt. Lett. 38, 5204–5207 (2013). [CrossRef]  

23. B. C. Platt and R. Shack, “History and principles of Shack–Hartmann wavefront sensing,” J. Refract. Surg. 17, S573–S577 (2001).

24. B. Stoklasa, L. Motka, J. Rehacek, Z. Hradil, and L. L. Sánchez-Soto, “Wavefront sensing reveals optical coherence,” Nat. Commun. 5, 3275 (2014). [CrossRef]  

25. R. W. Wilson, “SLODAR: measuring optical turbulence altitude with a Shack–Hartmann wavefront sensor,” Mon. Not. R. Astron. Soc. 337, 103–108 (2002). [CrossRef]  

26. D. Dayton, J. Gonglewski, B. Pierson, and B. Spielbusch, “Atmospheric structure function measurements with a Shack–Hartmann wave-front sensor,” Opt. Lett. 17, 1737–1739 (1992). [CrossRef]  

27. X. Cui, J. Ren, G. J. Tearney, and C. Yang, “Wavefront image sensor chip,” Opt. Express 18, 16685–16701 (2010). [CrossRef]  

28. L. Seifert, J. Liesener, and H. J. Tiziani, “The adaptive Shack–Hartmann sensor,” Opt. Commun. 216, 313–319 (2003). [CrossRef]  

29. Y. Saita, H. Shinto, and T. Nomura, “Holographic Shack–Hartmann wavefront sensor based on the correlation peak displacement detection method for wavefront sensing with large dynamic range,” Optica 2, 411–415 (2015). [CrossRef]  

30. S. Thomas, T. Fusco, A. Tokovinin, M. Nicolle, V. Michau, and G. Rousset, “Comparison of centroid computation algorithms in a Shack–Hartmann sensor,” Mon. Not. R. Astron. Soc. 371, 323–336 (2006). [CrossRef]  

31. J. Pfund, N. Lindlein, and J. Schwider, “Dynamic range expansion of a Shack–Hartmann sensor by use of a modified unwrapping algorithm,” Opt. Lett. 23, 995–997 (1998). [CrossRef]  

32. S. Wang, P. Yang, B. Xu, L. Dong, and M. Ao, “Shack–Hartmann wavefront sensing based on binary-aberration-mode filtering,” Opt. Express 23, 5052–5063 (2015). [CrossRef]  

33. R. Martínez-Cuenca, V. Durán, V. Climent, E. Tajahuerce, S. Bará, J. Ares, J. Arines, M. Martínez-Corral, and J. Lancis, “Reconfigurable Shack–Hartmann sensor without moving elements,” Opt. Lett. 35, 1338–1340 (2010). [CrossRef]  

34. T. Godin, M. Fromager, E. Cagniot, M. Brunel, and K. Aït-Ameur, “Reconstruction-free sensitive wavefront sensor based on continuous position sensitive detectors,” Appl. Opt. 52, 8310–8317 (2013). [CrossRef]  

35. R. Navarro and E. Moreno-Barriuso, “Laser ray-tracing method for optical testing,” Opt. Lett. 24, 951–953 (1999). [CrossRef]  

36. S. Olivier, V. Laude, and J. Huignard, “Liquid-crystal Hartmann wave-front scanner,” Appl. Opt. 39, 3838–3846 (2000). [CrossRef]  

37. J.-C. Chanteloup, “Multiple-wave lateral shearing interferometry for wave-front sensing,” Appl. Opt. 44, 1559–1571 (2005). [CrossRef]  

38. “Phasics,” http://phasicscorp.com.

39. I. Iglesias, “Pyramid phase microscopy,” Opt. Lett. 36, 3636–3638 (2011). [CrossRef]  

40. A. B. Parthasarathy, K. K. Chu, T. N. Ford, and J. Mertz, “Quantitative phase imaging using a partitioned detection aperture,” Opt. Lett. 37, 4062–4064 (2012). [CrossRef]  

41. I. Iglesias and F. Vargas-Martin, “Quantitative phase microscopy of transparent samples using a liquid crystal display,” J. Biomed. Opt. 18, 026015 (2013). [CrossRef]  

42. H. Lu, J. Chung, X. Ou, and C. Yang, “Quantitative phase imaging and complex field reconstruction by pupil modulation differential phase contrast,” Opt. Express 24, 25345–25361 (2016). [CrossRef]  

43. E. J. J. Candes and M. B. Wakin, “An introduction to compressive sampling,” IEEE Signal Process. Mag. 25, 21–30 (2008). [CrossRef]  

44. M. F. F. Duarte, M. A. A. Davenport, D. Takhar, J. N. N. Laska, K. F. F. Kelly, and R. G. Baraniuk, “Single-pixel imaging via compressive sampling,” IEEE Signal Process. Mag. 25, 83–91 (2008). [CrossRef]  

45. H. Gong, O. Soloviev, D. Wilding, P. Pozzi, M. Verhaegen, and G. Vdovin, “Holographic imaging with a Shack–Hartmann wavefront sensor,” Opt. Express 24, 13729–13737 (2016). [CrossRef]  

46. F. Zhang, G. Pedrini, and W. Osten, “Phase retrieval of arbitrary complex-valued fields through aperture-plane modulation,” Phys. Rev. A 75, 043805 (2007). [CrossRef]  

47. P. Gao, G. Pedrini, C. Zuo, and W. Osten, “Phase retrieval using spatially modulated illumination,” Opt. Lett. 39, 3615–3618 (2014). [CrossRef]  

48. S. Chowdhury and J. Izatt, “Structured illumination quantitative phase microscopy for enhanced resolution amplitude and phase imaging,” Biomed. Opt. Express 4, 1795–1805 (2013). [CrossRef]  

49. S. Chowdhury and J. Izatt, “Structured illumination diffraction phase microscopy for broadband, subdiffraction resolution, quantitative phase imaging,” Opt. Lett. 39, 1015–1018 (2014). [CrossRef]  

50. G. Vdovin, H. Gong, O. Soloviev, P. Pozzi, and M. Verhaegen, “Lensless coherent imaging by sampling of the optical field with digital micromirror device,” J. Opt. 17, 122001 (2015). [CrossRef]  

51. W. H. Southwell, “Wave-front estimation from wave-front slope measurements,” J. Opt. Soc. Am. 70, 998–1006 (1980). [CrossRef]  

52. R. Kasztelanic, A. Filipkowski, D. Pysz, R. Stepien, A. J. Waddie, M. R. Taghizadeh, and R. Buczynski, “High resolution Shack–Hartmann sensor based on array of nanostructured GRIN lenses,” Opt. Express 25, 1680–1691 (2017). [CrossRef]  

53. M. Saxena, G. Eluru, and S. S. Gorthi, “Structured illumination microscopy,” Adv. Opt. Photon. 7, 241–275 (2015). [CrossRef]  

54. C. M. Watts, D. Shrekenhamer, J. Montoya, G. Lipworth, J. Hunt, T. Sleasman, S. Krishna, D. R. Smith, and W. J. Padilla, “Terahertz compressive imaging with metamaterial spatial light modulators,” Nat. Photonics 8, 605–609 (2014). [CrossRef]  

55. P. Clemente, V. Durán, E. Tajahuerce, P. Andrés, V. Climent, and J. Lancis, “Compressive holography with a single-pixel detector,” Opt. Lett. 38, 2524–2527 (2013). [CrossRef]  

56. L. Martínez-León, P. Clemente, Y. Mori, V. Climent, J. Lancis, and E. Tajahuerce, “Single-pixel digital holography with phase-encoded illumination,” Opt. Express 25, 4975–4984 (2017). [CrossRef]  

57. P. A. Stockton, J. J. Field, and R. A. Bartels, “Single pixel quantitative phase imaging with spatial frequency projections,” Methods (2017), in press. [CrossRef]  

58. F. Soldevila, E. Irles, V. Durán, P. Clemente, M. Fernández-Alonso, E. Tajahuerce, and J. Lancis, “Single-pixel polarimetric imaging spectrometer by compressive sensing,” Appl. Phys. B 113, 551–558 (2013). [CrossRef]  

59. F. Soldevila, P. Clemente, E. Tajahuerce, N. Uribe-Patarroyo, P. Andrés, and J. Lancis, “Computational imaging with a balanced detector,” Sci. Rep. 6, 29181 (2016). [CrossRef]  

60. E. Tajahuerce, V. Durán, P. Clemente, E. Irles, F. Soldevila, P. Andrés, and J. Lancis, “Image transmission through dynamic scattering media by single-pixel photodetection,” Opt. Express 22, 16945–16955 (2014). [CrossRef]  

61. V. Durán, F. Soldevila, E. Irles, P. Clemente, E. Tajahuerce, P. Andrés, and J. Lancis, “Compressive imaging in scattering media,” Opt. Express 23, 14424–14433 (2015). [CrossRef]  

62. P. R. Griffiths, H. J. Sloane, and R. W. Hannah, “Interferometers vs monochromators: separating the optical and digital advantages,” Appl. Spectrosc. 31, 485–495 (1977). [CrossRef]  

63. V. Studer, J. Bobin, M. Chahid, H. S. Mousavi, E. Candes, and M. Dahan, “Compressive fluorescence microscopy for biological and hyperspectral imaging,” Proc. Natl. Acad. Sci. USA 109, E1679–E1687 (2012). [CrossRef]  

64. M. Ducros, Y. Goulam Houssen, J. Bradley, V. de Sars, and S. Charpak, “Encoded multisite two-photon microscopy,” Proc. Natl. Acad. Sci. USA 110, 13138–13143 (2013). [CrossRef]  

65. W. Pratt, J. Kane, and H. Andrews, “Hadamard transform image coding,” Proc. IEEE 57, 58–68 (1969). [CrossRef]  

66. E. Candès and J. Romberg, “l1-magic: recovery of sparse signals via convex programming,” https://www.cs.bham.ac.uk/~axk/Sakinah/inspiring_readings/l1magic.pdf.

67. B. Lochocki, A. Gambín, S. Manzanera, E. Irles, E. Tajahuerce, J. Lancis, and P. Artal, “Single pixel camera ophthalmoscope,” Optica 3, 1056–1059 (2016). [CrossRef]  

68. Y. Wu, P. Ye, I. O. Mirza, G. R. Arce, and D. W. Prather, “Experimental demonstration of an optical-sectioning compressive sensing microscope (CSM),” Opt. Express 18, 24565–24578 (2010). [CrossRef]  

69. H. Gong, T. E. Agbana, P. Pozzi, O. Soloviev, M. Verhaegen, and G. Vdovin, “Optical path difference microscopy with a Shack–Hartmann wavefront sensor,” Opt. Lett. 42, 2122–2125 (2017). [CrossRef]  

70. P. A. Morris, R. S. Aspden, J. E. C. Bell, R. W. Boyd, and M. J. Padgett, “Imaging with a small number of photons,” Nat. Commun. 6, 5913 (2015). [CrossRef]  

71. Z. Wang, L. Millet, M. Mir, H. Ding, S. Unarunotai, J. Rogers, M. U. Gillette, and G. Popescu, “Spatial light interference microscopy (SLIM),” Opt. Express 19, 1016–1026 (2011). [CrossRef]  

72. E. Candès, “Compressive sampling,” in Proceedings of the International Congress of Mathematicians (European Mathematical Society, 2006), pp. 1433–1452.

73. F. Soldevila, E. Salvador-Balaguer, P. Clemente, E. Tajahuerce, and J. Lancis, “High-resolution adaptive imaging with a single photodiode,” Sci. Rep. 5, 14300 (2015). [CrossRef]  

74. H. Dai, G. Gu, W. He, L. Ye, T. Mao, and Q. Chen, “Adaptive compressed photon counting 3D imaging based on wavelet trees and depth map sparse representation,” Opt. Express 24, 26080–26096 (2016). [CrossRef]  

75. N. Radwell, K. J. Mitchell, G. GIbson, M. Edgar, R. Bowman, and M. J. Padgett, “Single-pixel infrared and visible microscope,” Optica 1, 285–289 (2014). [CrossRef]  

76. J. Romberg, “Imaging via compressive sampling,” IEEE Signal Process. Mag. 25, 14–20 (2008). [CrossRef]  

77. D. B. Conkey, A. M. Caravaca-Aguirre, and R. Piestun, “High-speed scattering medium characterization with application to focusing light through turbid media,” Opt. Express 20, 1733–1740 (2012). [CrossRef]  

78. I. Yamaguchi and T. Zhang, “Phase-shifting digital holography,” Opt. Lett. 22, 1268–1270 (1997). [CrossRef]  

    [Crossref]
  30. S. Thomas, T. Fusco, A. Tokovinin, M. Nicolle, V. Michau, and G. Rousset, “Comparison of centroid computation algorithms in a Shack–Hartmann sensor,” Mon. Not. R. Astron. Soc. 371, 323–336 (2006).
    [Crossref]
  31. J. Pfund, N. Lindlein, and J. Schwider, “Dynamic range expansion of a Shack–Hartmann sensor by use of a modified unwrapping algorithm,” Opt. Lett. 23, 995–997 (1998).
    [Crossref]
  32. S. Wang, P. Yang, B. Xu, L. Dong, and M. Ao, “Shack–Hartmann wavefront sensing based on binary-aberration-mode filtering,” Opt. Express 23, 5052–5063 (2015).
    [Crossref]
  33. R. Martínez-Cuenca, V. Durán, V. Climent, E. Tajahuerce, S. Bará, J. Ares, J. Arines, M. Martínez-Corral, and J. Lancis, “Reconfigurable Shack–Hartmann sensor without moving elements,” Opt. Lett. 35, 1338–1340 (2010).
    [Crossref]
  34. T. Godin, M. Fromager, E. Cagniot, M. Brunel, and K. Aït-Ameur, “Reconstruction-free sensitive wavefront sensor based on continuous position sensitive detectors,” Appl. Opt. 52, 8310–8317 (2013).
    [Crossref]
  35. R. Navarro and E. Moreno-Barriuso, “Laser ray-tracing method for optical testing,” Opt. Lett. 24, 951–953 (1999).
    [Crossref]
  36. S. Olivier, V. Laude, and J. Huignard, “Liquid-crystal Hartmann wave-front scanner,” Appl. Opt. 39, 3838–3846 (2000).
    [Crossref]
  37. J.-C. Chanteloup, “Multiple-wave lateral shearing interferometry for wave-front sensing,” Appl. Opt. 44, 1559–1571 (2005).
    [Crossref]
  38. “Phasics,” http://phasicscorp.com .
  39. I. Iglesias, “Pyramid phase microscopy,” Opt. Lett. 36, 3636–3638 (2011).
    [Crossref]
  40. A. B. Parthasarathy, K. K. Chu, T. N. Ford, and J. Mertz, “Quantitative phase imaging using a partitioned detection aperture,” Opt. Lett. 37, 4062–4064 (2012).
    [Crossref]
  41. I. Iglesias and F. Vargas-Martin, “Quantitative phase microscopy of transparent samples using a liquid crystal display,” J. Biomed. Opt. 18, 026015 (2013).
    [Crossref]
  42. H. Lu, J. Chung, X. Ou, and C. Yang, “Quantitative phase imaging and complex field reconstruction by pupil modulation differential phase contrast,” Opt. Express 24, 25345–25361 (2016).
    [Crossref]
  43. E. J. J. Candes and M. B. Wakin, “An introduction to compressive sampling,” IEEE Signal Process. Mag. 25, 21–30 (2008).
    [Crossref]
  44. M. F. F. Duarte, M. A. A. Davenport, D. Takhar, J. N. N. Laska, K. F. F. Kelly, and R. G. Baraniuk, “Single-pixel imaging via compressive sampling,” IEEE Signal Process. Mag. 25, 83–91 (2008).
    [Crossref]
  45. H. Gong, O. Soloviev, D. Wilding, P. Pozzi, M. Verhaegen, and G. Vdovin, “Holographic imaging with a Shack–Hartmann wavefront sensor,” Opt. Express 24, 13729–13737 (2016).
    [Crossref]
  46. F. Zhang, G. Pedrini, and W. Osten, “Phase retrieval of arbitrary complex-valued fields through aperture-plane modulation,” Phys. Rev. A 75, 043805 (2007).
    [Crossref]
  47. P. Gao, G. Pedrini, C. Zuo, and W. Osten, “Phase retrieval using spatially modulated illumination,” Opt. Lett. 39, 3615–3618 (2014).
    [Crossref]
  48. S. Chowdhury and J. Izatt, “Structured illumination quantitative phase microscopy for enhanced resolution amplitude and phase imaging,” Biomed. Opt. Express 4, 1795–1805 (2013).
    [Crossref]
  49. S. Chowdhury and J. Izatt, “Structured illumination diffraction phase microscopy for broadband, subdiffraction resolution, quantitative phase imaging,” Opt. Lett. 39, 1015–1018 (2014).
    [Crossref]
  50. G. Vdovin, H. Gong, O. Soloviev, P. Pozzi, and M. Verhaegen, “Lensless coherent imaging by sampling of the optical field with digital micromirror device,” J. Opt. 17, 122001 (2015).
    [Crossref]
  51. W. H. Southwell, “Wave-front estimation from wave-front slope measurements,” J. Opt. Soc. Am. 70, 998–1006 (1980).
    [Crossref]
  52. R. Kasztelanic, A. Filipkowski, D. Pysz, R. Stepien, A. J. Waddie, M. R. Taghizadeh, and R. Buczynski, “High resolution Shack–Hartmann sensor based on array of nanostructured GRIN lenses,” Opt. Express 25, 1680–1691 (2017).
    [Crossref]
  53. M. Saxena, G. Eluru, and S. S. Gorthi, “Structured illumination microscopy,” Adv. Opt. Photon. 7, 241–275 (2015).
    [Crossref]
  54. C. M. Watts, D. Shrekenhamer, J. Montoya, G. Lipworth, J. Hunt, T. Sleasman, S. Krishna, D. R. Smith, and W. J. Padilla, “Terahertz compressive imaging with metamaterial spatial light modulators,” Nat. Photonics 8, 605–609 (2014).
    [Crossref]
  55. P. Clemente, V. Durán, E. Tajahuerce, P. Andrés, V. Climent, and J. Lancis, “Compressive holography with a single-pixel detector,” Opt. Lett. 38, 2524–2527 (2013).
    [Crossref]
  56. L. Martínez-León, P. Clemente, Y. Mori, V. Climent, J. Lancis, and E. Tajahuerce, “Single-pixel digital holography with phase-encoded illumination,” Opt. Express 25, 4975–4984 (2017).
    [Crossref]
  57. P. A. Stockton, J. J. Field, and R. A. Bartels, “Single pixel quantitative phase imaging with spatial frequency projections,” Methods (2017), in press.
    [Crossref]
  58. F. Soldevila, E. Irles, V. Durán, P. Clemente, M. Fernández-Alonso, E. Tajahuerce, and J. Lancis, “Single-pixel polarimetric imaging spectrometer by compressive sensing,” Appl. Phys. B 113, 551–558 (2013).
    [Crossref]
  59. F. Soldevila, P. Clemente, E. Tajahuerce, N. Uribe-Patarroyo, P. Andrés, and J. Lancis, “Computational imaging with a balanced detector,” Sci. Rep. 6, 29181 (2016).
    [Crossref]
  60. E. Tajahuerce, V. Durán, P. Clemente, E. Irles, F. Soldevila, P. Andrés, and J. Lancis, “Image transmission through dynamic scattering media by single-pixel photodetection,” Opt. Express 22, 16945–16955 (2014).
    [Crossref]
  61. V. Durán, F. Soldevila, E. Irles, P. Clemente, E. Tajahuerce, P. Andrés, and J. Lancis, “Compressive imaging in scattering media,” Opt. Express 23, 14424–14433 (2015).
    [Crossref]
  62. P. R. Griffiths, H. J. Sloane, and R. W. Hannah, “Interferometers vs monochromators: separating the optical and digital advantages,” Appl. Spectrosc. 31, 485–495 (1977).
    [Crossref]
  63. V. Studer, J. Bobin, M. Chahid, H. S. Mousavi, E. Candes, and M. Dahan, “Compressive fluorescence microscopy for biological and hyperspectral imaging,” Proc. Natl. Acad. Sci. USA 109, E1679–E1687 (2012).
    [Crossref]
  64. M. Ducros, Y. Goulam Houssen, J. Bradley, V. de Sars, and S. Charpak, “Encoded multisite two-photon microscopy,” Proc. Natl. Acad. Sci. USA 110, 13138–13143 (2013).
    [Crossref]
  65. W. Pratt, J. Kane, and H. Andrews, “Hadamard transform image coding,” Proc. IEEE 57, 58–68 (1969).
    [Crossref]
  66. E. Candès and J. Romberg, “l1-magic: recovery of sparse signals via convex programming,” https://www.cs.bham.ac.uk/~axk/Sakinah/inspiring_readings/l1magic.pdf .
  67. B. Lochocki, A. Gambín, S. Manzanera, E. Irles, E. Tajahuerce, J. Lancis, and P. Artal, “Single pixel camera ophthalmoscope,” Optica 3, 1056–1059 (2016).
    [Crossref]
  68. Y. Wu, P. Ye, I. O. Mirza, G. R. Arce, and D. W. Prather, “Experimental demonstration of an optical-sectioning compressive sensing microscope (CSM),” Opt. Express 18, 24565–24578 (2010).
    [Crossref]
  69. H. Gong, T. E. Agbana, P. Pozzi, O. Soloviev, M. Verhaegen, and G. Vdovin, “Optical path difference microscopy with a Shack–Hartmann wavefront sensor,” Opt. Lett. 42, 2122–2125 (2017).
    [Crossref]
  70. P. A. Morris, R. S. Aspden, J. E. C. Bell, R. W. Boyd, and M. J. Padgett, “Imaging with a small number of photons,” Nat. Commun. 6, 5913 (2015).
    [Crossref]
  71. Z. Wang, L. Millet, M. Mir, H. Ding, S. Unarunotai, J. Rogers, M. U. Gillette, and G. Popescu, “Spatial light interference microscopy (SLIM),” Opt. Express 19, 1016–1026 (2011).
    [Crossref]
  72. E. Candès, “Compressive sampling,” in Proceedings of the International Congress of Mathematicians (European Mathematical Society, 2006), pp. 1433–1452.
  73. F. Soldevila, E. Salvador-Balaguer, P. Clemente, E. Tajahuerce, and J. Lancis, “High-resolution adaptive imaging with a single photodiode,” Sci. Rep. 5, 14300 (2015).
    [Crossref]
  74. H. Dai, G. Gu, W. He, L. Ye, T. Mao, and Q. Chen, “Adaptive compressed photon counting 3D imaging based on wavelet trees and depth map sparse representation,” Opt. Express 24, 26080–26096 (2016).
    [Crossref]
  75. N. Radwell, K. J. Mitchell, G. GIbson, M. Edgar, R. Bowman, and M. J. Padgett, “Single-pixel infrared and visible microscope,” Optica 1, 285–289 (2014).
    [Crossref]
  76. J. Romberg, “Imaging via compressive sampling,” IEEE Signal Process. Mag. 25, 14–20 (2008).
    [Crossref]
  77. D. B. Conkey, A. M. Caravaca-Aguirre, and R. Piestun, “High-speed scattering medium characterization with application to focusing light through turbid media,” Opt. Express 20, 1733–1740 (2012).
    [Crossref]
  78. I. Yamaguchi and T. Zhang, “Phase-shifting digital holography,” Opt. Lett. 22, 1268–1270 (1997).
    [Crossref]

[Crossref]

Moreno-Barriuso, E.

Mori, Y.

Morris, P. A.

P. A. Morris, R. S. Aspden, J. E. C. Bell, R. W. Boyd, and M. J. Padgett, “Imaging with a small number of photons,” Nat. Commun. 6, 5913 (2015).
[Crossref]

Motka, L.

B. Stoklasa, L. Motka, J. Rehacek, Z. Hradil, and L. L. Sánchez-Soto, “Wavefront sensing reveals optical coherence,” Nat. Commun. 5, 3275 (2014).
[Crossref]

Mousavi, H. S.

V. Studer, J. Bobin, M. Chahid, H. S. Mousavi, E. Candes, and M. Dahan, “Compressive fluorescence microscopy for biological and hyperspectral imaging,” Proc. Natl. Acad. Sci. USA 109, E1679–E1687 (2012).
[Crossref]

Mumm, J.

K. Wang, D. E. Milkie, A. Saxena, P. Engerer, T. Misgeld, M. E. Bronner, J. Mumm, and E. Betzig, “Rapid adaptive optical recovery of optimal resolution over large volumes,” Nat. Methods 11, 625–628 (2014).
[Crossref]

Navarro, R.

Nehmetallah, G.

Nicolle, M.

S. Thomas, T. Fusco, A. Tokovinin, M. Nicolle, V. Michau, and G. Rousset, “Comparison of centroid computation algorithms in a Shack–Hartmann sensor,” Mon. Not. R. Astron. Soc. 371, 323–336 (2006).
[Crossref]

Nomura, T.

Nugent, K. A.

H. N. Chapman and K. A. Nugent, “Coherent lensless x-ray imaging,” Nat. Photonics 4, 833–839 (2010).
[Crossref]

Ogura, Y.

Olivier, S.

Osten, W.

Ou, X.

Padgett, M. J.

P. A. Morris, R. S. Aspden, J. E. C. Bell, R. W. Boyd, and M. J. Padgett, “Imaging with a small number of photons,” Nat. Commun. 6, 5913 (2015).
[Crossref]

N. Radwell, K. J. Mitchell, G. GIbson, M. Edgar, R. Bowman, and M. J. Padgett, “Single-pixel infrared and visible microscope,” Optica 1, 285–289 (2014).
[Crossref]

Padilla, W. J.

C. M. Watts, D. Shrekenhamer, J. Montoya, G. Lipworth, J. Hunt, T. Sleasman, S. Krishna, D. R. Smith, and W. J. Padilla, “Terahertz compressive imaging with metamaterial spatial light modulators,” Nat. Photonics 8, 605–609 (2014).
[Crossref]

Parthasarathy, A. B.

Pedrini, G.

Pfund, J.

Pierson, B.

Piestun, R.

Platt, B. C.

B. C. Platt and R. Shack, “History and principles of Shack–Hartmann wavefront sensing,” J. Refract. Surg. 17, S573–S577 (2001).

Popescu, G.

G. Popescu, “The power of imaging with phase, not power,” Phys. Today 70(5), 34–40 (2017).
[Crossref]

Z. Wang, L. Millet, M. Mir, H. Ding, S. Unarunotai, J. Rogers, M. U. Gillette, and G. Popescu, “Spatial light interference microscopy (SLIM),” Opt. Express 19, 1016–1026 (2011).
[Crossref]

M. Mir, B. Bhaduri, R. Wang, R. Zhu, and G. Popescu, “Quantitative phase imaging,” in Progress in Optics (Elsevier, 2012), Vol. 57, pp. 133–217.

Pozzi, P.

Prather, D. W.

Pratt, W.

W. Pratt, J. Kane, and H. Andrews, “Hadamard transform image coding,” Proc. IEEE 57, 58–68 (1969).
[Crossref]

Pysz, D.

Radwell, N.

Ramchandran, K.

Rehacek, J.

B. Stoklasa, L. Motka, J. Rehacek, Z. Hradil, and L. L. Sánchez-Soto, “Wavefront sensing reveals optical coherence,” Nat. Commun. 5, 3275 (2014).
[Crossref]

Ren, J.

Rogers, J.

Romberg, J.

J. Romberg, “Imaging via compressive sampling,” IEEE Signal Process. Mag. 25, 14–20 (2008).
[Crossref]

Rousset, G.

S. Thomas, T. Fusco, A. Tokovinin, M. Nicolle, V. Michau, and G. Rousset, “Comparison of centroid computation algorithms in a Shack–Hartmann sensor,” Mon. Not. R. Astron. Soc. 371, 323–336 (2006).
[Crossref]

Saita, Y.

Salvador-Balaguer, E.

F. Soldevila, E. Salvador-Balaguer, P. Clemente, E. Tajahuerce, and J. Lancis, “High-resolution adaptive imaging with a single photodiode,” Sci. Rep. 5, 14300 (2015).
[Crossref]

Sánchez-Soto, L. L.

B. Stoklasa, L. Motka, J. Rehacek, Z. Hradil, and L. L. Sánchez-Soto, “Wavefront sensing reveals optical coherence,” Nat. Commun. 5, 3275 (2014).
[Crossref]

Saxena, A.

K. Wang, D. E. Milkie, A. Saxena, P. Engerer, T. Misgeld, M. E. Bronner, J. Mumm, and E. Betzig, “Rapid adaptive optical recovery of optimal resolution over large volumes,” Nat. Methods 11, 625–628 (2014).
[Crossref]

Saxena, M.

Schwider, J.

Segev, M.

Y. Shechtman, Y. C. Eldar, O. Cohen, H. N. Chapman, J. Miao, and M. Segev, “Phase retrieval with application to optical imaging: a contemporary overview,” IEEE Signal Process. Mag. 32, 87–109 (2015).
[Crossref]

Seifert, L.

L. Seifert, J. Liesener, and H. J. Tiziani, “The adaptive Shack–Hartmann sensor,” Opt. Commun. 216, 313–319 (2003).
[Crossref]

Shack, R.

B. C. Platt and R. Shack, “History and principles of Shack–Hartmann wavefront sensing,” J. Refract. Surg. 17, S573–S577 (2001).

Shechtman, Y.

Y. Shechtman, Y. C. Eldar, O. Cohen, H. N. Chapman, J. Miao, and M. Segev, “Phase retrieval with application to optical imaging: a contemporary overview,” IEEE Signal Process. Mag. 32, 87–109 (2015).
[Crossref]

Shinto, H.

Shrekenhamer, D.

C. M. Watts, D. Shrekenhamer, J. Montoya, G. Lipworth, J. Hunt, T. Sleasman, S. Krishna, D. R. Smith, and W. J. Padilla, “Terahertz compressive imaging with metamaterial spatial light modulators,” Nat. Photonics 8, 605–609 (2014).
[Crossref]

Sleasman, T.

C. M. Watts, D. Shrekenhamer, J. Montoya, G. Lipworth, J. Hunt, T. Sleasman, S. Krishna, D. R. Smith, and W. J. Padilla, “Terahertz compressive imaging with metamaterial spatial light modulators,” Nat. Photonics 8, 605–609 (2014).
[Crossref]

Sloane, H. J.

Smith, D. R.

C. M. Watts, D. Shrekenhamer, J. Montoya, G. Lipworth, J. Hunt, T. Sleasman, S. Krishna, D. R. Smith, and W. J. Padilla, “Terahertz compressive imaging with metamaterial spatial light modulators,” Nat. Photonics 8, 605–609 (2014).
[Crossref]

Soldevila, F.

F. Soldevila, P. Clemente, E. Tajahuerce, N. Uribe-Patarroyo, P. Andrés, and J. Lancis, “Computational imaging with a balanced detector,” Sci. Rep. 6, 29181 (2016).
[Crossref]

V. Durán, F. Soldevila, E. Irles, P. Clemente, E. Tajahuerce, P. Andrés, and J. Lancis, “Compressive imaging in scattering media,” Opt. Express 23, 14424–14433 (2015).
[Crossref]

F. Soldevila, E. Salvador-Balaguer, P. Clemente, E. Tajahuerce, and J. Lancis, “High-resolution adaptive imaging with a single photodiode,” Sci. Rep. 5, 14300 (2015).
[Crossref]

E. Tajahuerce, V. Durán, P. Clemente, E. Irles, F. Soldevila, P. Andrés, and J. Lancis, “Image transmission through dynamic scattering media by single-pixel photodetection,” Opt. Express 22, 16945–16955 (2014).
[Crossref]

F. Soldevila, E. Irles, V. Durán, P. Clemente, M. Fernández-Alonso, E. Tajahuerce, and J. Lancis, “Single-pixel polarimetric imaging spectrometer by compressive sensing,” Appl. Phys. B 113, 551–558 (2013).
[Crossref]

Soloviev, O.

Southwell, W. H.

Spielbusch, B.

Squier, J. A.

J. J. Field, K. A. Wernsing, S. R. Domingue, A. M. Allende Motz, K. F. DeLuca, D. H. Levi, J. G. DeLuca, M. D. Young, J. A. Squier, and R. A. Bartels, “Superresolved multiphoton microscopy with spatial frequency-modulated imaging,” Proc. Natl. Acad. Sci. USA 113, 6605–6610 (2016).
[Crossref]

Stepien, R.

Stockton, P. A.

P. A. Stockton, J. J. Field, and R. A. Bartels, “Single pixel quantitative phase imaging with spatial frequency projections,” Methods (2017), in press.
[Crossref]

Stoklasa, B.

B. Stoklasa, L. Motka, J. Rehacek, Z. Hradil, and L. L. Sánchez-Soto, “Wavefront sensing reveals optical coherence,” Nat. Commun. 5, 3275 (2014).
[Crossref]

Streibl, N.

N. Streibl, “Phase imaging by the transport equation of intensity,” Opt. Commun. 49, 6–10 (1984).
[Crossref]

Studer, V.

V. Studer, J. Bobin, M. Chahid, H. S. Mousavi, E. Candes, and M. Dahan, “Compressive fluorescence microscopy for biological and hyperspectral imaging,” Proc. Natl. Acad. Sci. USA 109, E1679–E1687 (2012).
[Crossref]

Taghizadeh, M. R.

Tajahuerce, E.

L. Martínez-León, P. Clemente, Y. Mori, V. Climent, J. Lancis, and E. Tajahuerce, “Single-pixel digital holography with phase-encoded illumination,” Opt. Express 25, 4975–4984 (2017).
[Crossref]

F. Soldevila, P. Clemente, E. Tajahuerce, N. Uribe-Patarroyo, P. Andrés, and J. Lancis, “Computational imaging with a balanced detector,” Sci. Rep. 6, 29181 (2016).
[Crossref]

B. Lochocki, A. Gambín, S. Manzanera, E. Irles, E. Tajahuerce, J. Lancis, and P. Artal, “Single pixel camera ophthalmoscope,” Optica 3, 1056–1059 (2016).
[Crossref]

V. Durán, F. Soldevila, E. Irles, P. Clemente, E. Tajahuerce, P. Andrés, and J. Lancis, “Compressive imaging in scattering media,” Opt. Express 23, 14424–14433 (2015).
[Crossref]

F. Soldevila, E. Salvador-Balaguer, P. Clemente, E. Tajahuerce, and J. Lancis, “High-resolution adaptive imaging with a single photodiode,” Sci. Rep. 5, 14300 (2015).
[Crossref]

E. Tajahuerce, V. Durán, P. Clemente, E. Irles, F. Soldevila, P. Andrés, and J. Lancis, “Image transmission through dynamic scattering media by single-pixel photodetection,” Opt. Express 22, 16945–16955 (2014).
[Crossref]

F. Soldevila, E. Irles, V. Durán, P. Clemente, M. Fernández-Alonso, E. Tajahuerce, and J. Lancis, “Single-pixel polarimetric imaging spectrometer by compressive sensing,” Appl. Phys. B 113, 551–558 (2013).
[Crossref]

P. Clemente, V. Durán, E. Tajahuerce, P. Andrés, V. Climent, and J. Lancis, “Compressive holography with a single-pixel detector,” Opt. Lett. 38, 2524–2527 (2013).
[Crossref]

R. Martínez-Cuenca, V. Durán, V. Climent, E. Tajahuerce, S. Bará, J. Ares, J. Arines, M. Martínez-Corral, and J. Lancis, “Reconfigurable Shack–Hartmann sensor without moving elements,” Opt. Lett. 35, 1338–1340 (2010).
[Crossref]

Takhar, D.

M. F. F. Duarte, M. A. A. Davenport, D. Takhar, J. N. N. Laska, K. F. F. Kelly, and R. G. Baraniuk, “Single-pixel imaging via compressive sampling,” IEEE Signal Process. Mag. 25, 83–91 (2008).
[Crossref]

Tanida, J.

Tearney, G. J.

Thomas, S.

S. Thomas, T. Fusco, A. Tokovinin, M. Nicolle, V. Michau, and G. Rousset, “Comparison of centroid computation algorithms in a Shack–Hartmann sensor,” Mon. Not. R. Astron. Soc. 371, 323–336 (2006).
[Crossref]

Tian, L.

Tiziani, H. J.

L. Seifert, J. Liesener, and H. J. Tiziani, “The adaptive Shack–Hartmann sensor,” Opt. Commun. 216, 313–319 (2003).
[Crossref]

Tokovinin, A.

S. Thomas, T. Fusco, A. Tokovinin, M. Nicolle, V. Michau, and G. Rousset, “Comparison of centroid computation algorithms in a Shack–Hartmann sensor,” Mon. Not. R. Astron. Soc. 371, 323–336 (2006).
[Crossref]

Tyson, R.

R. Tyson, Principles of Adaptive Optics, Series in Optics and Optoelectronics, 3rd ed. (CRC Press, 2010), Vol. 20102628.

Unarunotai, S.

Uribe-Patarroyo, N.

F. Soldevila, P. Clemente, E. Tajahuerce, N. Uribe-Patarroyo, P. Andrés, and J. Lancis, “Computational imaging with a balanced detector,” Sci. Rep. 6, 29181 (2016).
[Crossref]

Vargas-Martin, F.

I. Iglesias and F. Vargas-Martin, “Quantitative phase microscopy of transparent samples using a liquid crystal display,” J. Biomed. Opt. 18, 026015 (2013).
[Crossref]

Vdovin, G.

Verhaegen, M.

Waddie, A. J.

Wakin, M. B.

E. J. J. Candes and M. B. Wakin, “An introduction to compressive sampling,” IEEE Signal Process. Mag. 25, 21–30 (2008).
[Crossref]

Waller, L.

Wang, K.

K. Wang, D. E. Milkie, A. Saxena, P. Engerer, T. Misgeld, M. E. Bronner, J. Mumm, and E. Betzig, “Rapid adaptive optical recovery of optimal resolution over large volumes,” Nat. Methods 11, 625–628 (2014).
[Crossref]

Wang, R.

M. Mir, B. Bhaduri, R. Wang, R. Zhu, and G. Popescu, “Quantitative phase imaging,” in Progress in Optics (Elsevier, 2012), Vol. 57, pp. 133–217.

Wang, S.

Wang, Z.

Watts, C. M.

C. M. Watts, D. Shrekenhamer, J. Montoya, G. Lipworth, J. Hunt, T. Sleasman, S. Krishna, D. R. Smith, and W. J. Padilla, “Terahertz compressive imaging with metamaterial spatial light modulators,” Nat. Photonics 8, 605–609 (2014).
[Crossref]

Wernsing, K. A.

J. J. Field, K. A. Wernsing, S. R. Domingue, A. M. Allende Motz, K. F. DeLuca, D. H. Levi, J. G. DeLuca, M. D. Young, J. A. Squier, and R. A. Bartels, “Superresolved multiphoton microscopy with spatial frequency-modulated imaging,” Proc. Natl. Acad. Sci. USA 113, 6605–6610 (2016).
[Crossref]

Wilding, D.

Wilson, R. W.

R. W. Wilson, “SLODAR: measuring optical turbulence altitude with a Shack–Hartmann wavefront sensor,” Mon. Not. R. Astron. Soc. 337, 103–108 (2002).
[Crossref]

Wu, Y.

Xu, B.

Yamaguchi, I.

Yang, C.

Yang, P.

Ye, L.

Ye, P.

Young, M. D.

J. J. Field, K. A. Wernsing, S. R. Domingue, A. M. Allende Motz, K. F. DeLuca, D. H. Levi, J. G. DeLuca, M. D. Young, J. A. Squier, and R. A. Bartels, “Superresolved multiphoton microscopy with spatial frequency-modulated imaging,” Proc. Natl. Acad. Sci. USA 113, 6605–6610 (2016).
[Crossref]

Zernike, F.

F. Zernike, “How I discovered phase contrast,” Science 121, 345–349 (1955).
[Crossref]

Zhang, F.

F. Zhang, G. Pedrini, and W. Osten, “Phase retrieval of arbitrary complex-valued fields through aperture-plane modulation,” Phys. Rev. A 75, 043805 (2007).
[Crossref]

Zhang, T.

Zheng, G.

G. Zheng, R. Horstmeyer, and C. Yang, “Wide-field, high-resolution Fourier ptychographic microscopy,” Nat. Photonics 7, 739–745 (2013).
[Crossref]

Zhu, R.

M. Mir, B. Bhaduri, R. Wang, R. Zhu, and G. Popescu, “Quantitative phase imaging,” in Progress in Optics (Elsevier, 2012), Vol. 57, pp. 133–217.

Zuo, C.

Adv. Opt. Photon. (3)

Appl. Opt. (3)

Appl. Phys. B (1)

F. Soldevila, E. Irles, V. Durán, P. Clemente, M. Fernández-Alonso, E. Tajahuerce, and J. Lancis, “Single-pixel polarimetric imaging spectrometer by compressive sensing,” Appl. Phys. B 113, 551–558 (2013).
[Crossref]

Appl. Spectrosc. (1)

Biomed. Opt. Express (2)

IEEE Signal Process. Mag. (4)

Y. Shechtman, Y. C. Eldar, O. Cohen, H. N. Chapman, J. Miao, and M. Segev, “Phase retrieval with application to optical imaging: a contemporary overview,” IEEE Signal Process. Mag. 32, 87–109 (2015).
[Crossref]

E. J. J. Candes and M. B. Wakin, “An introduction to compressive sampling,” IEEE Signal Process. Mag. 25, 21–30 (2008).
[Crossref]

M. F. F. Duarte, M. A. A. Davenport, D. Takhar, J. N. N. Laska, K. F. F. Kelly, and R. G. Baraniuk, “Single-pixel imaging via compressive sampling,” IEEE Signal Process. Mag. 25, 83–91 (2008).
[Crossref]

J. Romberg, “Imaging via compressive sampling,” IEEE Signal Process. Mag. 25, 14–20 (2008).
[Crossref]

J. Biomed. Opt. (1)

I. Iglesias and F. Vargas-Martin, “Quantitative phase microscopy of transparent samples using a liquid crystal display,” J. Biomed. Opt. 18, 026015 (2013).
[Crossref]

J. Opt. (1)

G. Vdovin, H. Gong, O. Soloviev, P. Pozzi, and M. Verhaegen, “Lensless coherent imaging by sampling of the optical field with digital micromirror device,” J. Opt. 17, 122001 (2015).
[Crossref]

J. Opt. Soc. Am. (1)

J. Opt. Soc. Am. A (1)

J. Refract. Surg. (1)

B. C. Platt and R. Shack, “History and principles of Shack–Hartmann wavefront sensing,” J. Refract. Surg. 17, S573–S577 (2001).

Light Sci. Appl. (1)

M. J. Booth, “Adaptive optical microscopy: the ongoing quest for a perfect image,” Light Sci. Appl. 3, e165 (2014).
[Crossref]

Mon. Not. R. Astron. Soc. (2)

R. W. Wilson, “SLODAR: measuring optical turbulence altitude with a Shack–Hartmann wavefront sensor,” Mon. Not. R. Astron. Soc. 337, 103–108 (2002).
[Crossref]

S. Thomas, T. Fusco, A. Tokovinin, M. Nicolle, V. Michau, and G. Rousset, “Comparison of centroid computation algorithms in a Shack–Hartmann sensor,” Mon. Not. R. Astron. Soc. 371, 323–336 (2006).
[Crossref]

Nat. Commun. (2)

B. Stoklasa, L. Motka, J. Rehacek, Z. Hradil, and L. L. Sánchez-Soto, “Wavefront sensing reveals optical coherence,” Nat. Commun. 5, 3275 (2014).
[Crossref]

P. A. Morris, R. S. Aspden, J. E. C. Bell, R. W. Boyd, and M. J. Padgett, “Imaging with a small number of photons,” Nat. Commun. 6, 5913 (2015).
[Crossref]

Nat. Methods (2)

N. Ji, “Adaptive optical fluorescence microscopy,” Nat. Methods 14, 374–380 (2017).
[Crossref]

K. Wang, D. E. Milkie, A. Saxena, P. Engerer, T. Misgeld, M. E. Bronner, J. Mumm, and E. Betzig, “Rapid adaptive optical recovery of optimal resolution over large volumes,” Nat. Methods 11, 625–628 (2014).
[Crossref]

Nat. Photonics (3)

G. Zheng, R. Horstmeyer, and C. Yang, “Wide-field, high-resolution Fourier ptychographic microscopy,” Nat. Photonics 7, 739–745 (2013).
[Crossref]

H. N. Chapman and K. A. Nugent, “Coherent lensless x-ray imaging,” Nat. Photonics 4, 833–839 (2010).
[Crossref]

C. M. Watts, D. Shrekenhamer, J. Montoya, G. Lipworth, J. Hunt, T. Sleasman, S. Krishna, D. R. Smith, and W. J. Padilla, “Terahertz compressive imaging with metamaterial spatial light modulators,” Nat. Photonics 8, 605–609 (2014).
[Crossref]

Opt. Commun. (2)

N. Streibl, “Phase imaging by the transport equation of intensity,” Opt. Commun. 49, 6–10 (1984).
[Crossref]

L. Seifert, J. Liesener, and H. J. Tiziani, “The adaptive Shack–Hartmann sensor,” Opt. Commun. 216, 313–319 (2003).
[Crossref]

Opt. Express (14)

S. Wang, P. Yang, B. Xu, L. Dong, and M. Ao, “Shack–Hartmann wavefront sensing based on binary-aberration-mode filtering,” Opt. Express 23, 5052–5063 (2015).
[Crossref]

R. Horisaki, R. Egami, and J. Tanida, “Single-shot phase imaging with randomized light (SPIRaL),” Opt. Express 24, 3765–3773 (2016).
[Crossref]

A. Kumar, W. Drexler, and R. A. Leitgeb, “Subaperture correlation based digital adaptive optics for full field optical coherence tomography,” Opt. Express 21, 10850–10866 (2013).
[Crossref]

L. Martínez-León, P. Clemente, Y. Mori, V. Climent, J. Lancis, and E. Tajahuerce, “Single-pixel digital holography with phase-encoded illumination,” Opt. Express 25, 4975–4984 (2017).
[Crossref]

E. Tajahuerce, V. Durán, P. Clemente, E. Irles, F. Soldevila, P. Andrés, and J. Lancis, “Image transmission through dynamic scattering media by single-pixel photodetection,” Opt. Express 22, 16945–16955 (2014).
[Crossref]

V. Durán, F. Soldevila, E. Irles, P. Clemente, E. Tajahuerce, P. Andrés, and J. Lancis, “Compressive imaging in scattering media,” Opt. Express 23, 14424–14433 (2015).
[Crossref]

Z. Wang, L. Millet, M. Mir, H. Ding, S. Unarunotai, J. Rogers, M. U. Gillette, and G. Popescu, “Spatial light interference microscopy (SLIM),” Opt. Express 19, 1016–1026 (2011).
[Crossref]

Y. Wu, P. Ye, I. O. Mirza, G. R. Arce, and D. W. Prather, “Experimental demonstration of an optical-sectioning compressive sensing microscope (CSM),” Opt. Express 18, 24565–24578 (2010).
[Crossref]

R. Kasztelanic, A. Filipkowski, D. Pysz, R. Stepien, A. J. Waddie, M. R. Taghizadeh, and R. Buczynski, “High resolution Shack–Hartmann sensor based on array of nanostructured GRIN lenses,” Opt. Express 25, 1680–1691 (2017).
[Crossref]

H. Lu, J. Chung, X. Ou, and C. Yang, “Quantitative phase imaging and complex field reconstruction by pupil modulation differential phase contrast,” Opt. Express 24, 25345–25361 (2016).
[Crossref]

X. Cui, J. Ren, G. J. Tearney, and C. Yang, “Wavefront image sensor chip,” Opt. Express 18, 16685–16701 (2010).
[Crossref]

H. Gong, O. Soloviev, D. Wilding, P. Pozzi, M. Verhaegen, and G. Vdovin, “Holographic imaging with a Shack–Hartmann wavefront sensor,” Opt. Express 24, 13729–13737 (2016).
[Crossref]

D. B. Conkey, A. M. Caravaca-Aguirre, and R. Piestun, “High-speed scattering medium characterization with application to focusing light through turbid media,” Opt. Express 20, 1733–1740 (2012).
[Crossref]

H. Dai, G. Gu, W. He, L. Ye, T. Mao, and Q. Chen, “Adaptive compressed photon counting 3D imaging based on wavelet trees and depth map sparse representation,” Opt. Express 24, 26080–26096 (2016).
[Crossref]

Opt. Lett. (13)

I. Yamaguchi and T. Zhang, “Phase-shifting digital holography,” Opt. Lett. 22, 1268–1270 (1997).
[Crossref]

I. Iglesias, “Pyramid phase microscopy,” Opt. Lett. 36, 3636–3638 (2011).
[Crossref]

A. B. Parthasarathy, K. K. Chu, T. N. Ford, and J. Mertz, “Quantitative phase imaging using a partitioned detection aperture,” Opt. Lett. 37, 4062–4064 (2012).
[Crossref]

P. Gao, G. Pedrini, C. Zuo, and W. Osten, “Phase retrieval using spatially modulated illumination,” Opt. Lett. 39, 3615–3618 (2014).
[Crossref]

S. Chowdhury and J. Izatt, “Structured illumination diffraction phase microscopy for broadband, subdiffraction resolution, quantitative phase imaging,” Opt. Lett. 39, 1015–1018 (2014).
[Crossref]

H. Gong, T. E. Agbana, P. Pozzi, O. Soloviev, M. Verhaegen, and G. Vdovin, “Optical path difference microscopy with a Shack–Hartmann wavefront sensor,” Opt. Lett. 42, 2122–2125 (2017).
[Crossref]

P. Clemente, V. Durán, E. Tajahuerce, P. Andrés, V. Climent, and J. Lancis, “Compressive holography with a single-pixel detector,” Opt. Lett. 38, 2524–2527 (2013).
[Crossref]

R. Horisaki, Y. Ogura, M. Aino, and J. Tanida, “Single-shot phase imaging with a coded aperture,” Opt. Lett. 39, 6466–6469 (2014).
[Crossref]

R. Martínez-Cuenca, V. Durán, V. Climent, E. Tajahuerce, S. Bará, J. Ares, J. Arines, M. Martínez-Corral, and J. Lancis, “Reconfigurable Shack–Hartmann sensor without moving elements,” Opt. Lett. 35, 1338–1340 (2010).
[Crossref]

R. Navarro and E. Moreno-Barriuso, “Laser ray-tracing method for optical testing,” Opt. Lett. 24, 951–953 (1999).
[Crossref]

J. Pfund, N. Lindlein, and J. Schwider, “Dynamic range expansion of a Shack–Hartmann sensor by use of a modified unwrapping algorithm,” Opt. Lett. 23, 995–997 (1998).
[Crossref]

D. Dayton, J. Gonglewski, B. Pierson, and B. Spielbusch, “Atmospheric structure function measurements with a Shack–Hartmann wave-front sensor,” Opt. Lett. 17, 1737–1739 (1992).
[Crossref]

P. Gao, G. Pedrini, and W. Osten, “Phase retrieval with resolution enhancement by using structured illumination,” Opt. Lett. 38, 5204–5207 (2013).
[Crossref]

Optica (4)

Phys. Rev. A (1)

F. Zhang, G. Pedrini, and W. Osten, “Phase retrieval of arbitrary complex-valued fields through aperture-plane modulation,” Phys. Rev. A 75, 043805 (2007).
[Crossref]

Phys. Today (1)

G. Popescu, “The power of imaging with phase, not power,” Phys. Today 70(5), 34–40 (2017).
[Crossref]

Proc. IEEE (1)

W. Pratt, J. Kane, and H. Andrews, “Hadamard transform image coding,” Proc. IEEE 57, 58–68 (1969).
[Crossref]

Proc. Natl. Acad. Sci. USA (3)

V. Studer, J. Bobin, M. Chahid, H. S. Mousavi, E. Candes, and M. Dahan, “Compressive fluorescence microscopy for biological and hyperspectral imaging,” Proc. Natl. Acad. Sci. USA 109, E1679–E1687 (2012).
[Crossref]

M. Ducros, Y. Goulam Houssen, J. Bradley, V. de Sars, and S. Charpak, “Encoded multisite two-photon microscopy,” Proc. Natl. Acad. Sci. USA 110, 13138–13143 (2013).
[Crossref]

J. J. Field, K. A. Wernsing, S. R. Domingue, A. M. Allende Motz, K. F. DeLuca, D. H. Levi, J. G. DeLuca, M. D. Young, J. A. Squier, and R. A. Bartels, “Superresolved multiphoton microscopy with spatial frequency-modulated imaging,” Proc. Natl. Acad. Sci. USA 113, 6605–6610 (2016).
[Crossref]

Proc. R. Soc. A (1)

D. Gabor, “Microscopy by reconstructed wave-fronts,” Proc. R. Soc. A 197, 454–487 (1949).
[Crossref]

Sci. Rep. (2)

F. Soldevila, P. Clemente, E. Tajahuerce, N. Uribe-Patarroyo, P. Andrés, and J. Lancis, “Computational imaging with a balanced detector,” Sci. Rep. 6, 29181 (2016).
[Crossref]

F. Soldevila, E. Salvador-Balaguer, P. Clemente, E. Tajahuerce, and J. Lancis, “High-resolution adaptive imaging with a single photodiode,” Sci. Rep. 5, 14300 (2015).
[Crossref]

Science (1)

F. Zernike, “How I discovered phase contrast,” Science 121, 345–349 (1955).
[Crossref]

Other (6)

M. Mir, B. Bhaduri, R. Wang, R. Zhu, and G. Popescu, “Quantitative phase imaging,” in Progress in Optics (Elsevier, 2012), Vol. 57, pp. 133–217.

R. Tyson, Principles of Adaptive Optics, Series in Optics and Optoelectronics, 3rd ed. (CRC Press, 2010), Vol. 20102628.

P. A. Stockton, J. J. Field, and R. A. Bartels, “Single pixel quantitative phase imaging with spatial frequency projections,” Methods (2017), in press.
[Crossref]

E. Candès and J. Romberg, “l1-magic: recovery of sparse signals via convex programming,” https://www.cs.bham.ac.uk/~axk/Sakinah/inspiring_readings/l1magic.pdf .

E. Candès, “Compressive sampling,” in Proceedings of the International Congress of Mathematicians (European Mathematical Society, 2006), pp. 1433–1452.

“Phasics,” http://phasicscorp.com .

Supplementary Material (1)

Supplement 1: Supplemental document



Figures (7)

Fig. 1. Complex amplitude retrieval using spatial wavefront sampling compared to current imaging and wavefront sensing techniques. (a) In a conventional imaging system, the phase information carried by the light coming from an object (for example, a biological sample) is completely lost, as the object image is formed onto a detector that simply measures the light irradiance. (b) Digital holography captures the interference between the light coming from the object and a reference beam, allowing one to retrieve the complex amplitude of the object from irradiance measurements. (c) Shack–Hartmann sensors measure the displacements of different foci generated by an array of lenslets. From those displacements, the wavefront impinging onto the lenslet plane can be reconstructed, providing information about the phase variations introduced by the object. (d) Our technique uses a sequence of illumination amplitude patterns and a single focusing lens. The object’s complex amplitude can be obtained from the statistical properties of the irradiance distribution measured in the Fourier plane of the lens. The same operation principle can be applied if, instead of the configuration shown in the figure, the light coming from an object is modulated by a set of patterns generated on a spatial light modulator (SLM).
Fig. 2. Operation principle of different wavefront sensing approaches. (a) Shack–Hartmann wavefront sensing. A wavefront coming from an object passes through an array of lenslets, which produce a distribution of focal spots on the detector. The position of each spot is linked to the local slope of the wavefront on every lenslet. Numerical integration of the slope data provides the phase of the wavefront. (b) Raster scanning wavefront sensing. A small amplitude aperture is moved over the wavefront plane and, for each consecutive scanning position, the Fourier distribution of the emerging light is recorded. The total irradiance of that distribution at each aperture position can be used to recover the amplitude image of the object. Additionally, the relative location of the centroid of each Fourier distribution provides the local slope of the wavefront at each scanning position, which allows one to obtain the phase image of the object. (c) Spatial wavefront sampling. Instead of using a small scanning aperture, the wavefront coming from the object is sampled by a set of amplitude masks (reconstruction basis). Now, from the irradiance of each Fourier distribution generated on the detector plane, one can measure the mathematical projection of the object amplitude onto the reconstruction basis. By demultiplexing that information, the object amplitude is spatially resolved. In the same way, demultiplexing the data concerning the centroid locations provides the slope map of the wavefront and then, after numerical integration, the object phase image.
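The demultiplexing step of Fig. 2(c) can be sketched numerically. The toy model below is illustrative only (not the authors' code): it assumes ±1 Hadamard masks, realized in practice as pairs of complementary binary patterns, and noiseless scalar measurements of a flattened quantity (e.g., the squared amplitude, or one component of the slope map).

```python
import numpy as np
from scipy.linalg import hadamard

# Toy sketch: recover an 8x8 map from single-value projections onto
# Hadamard masks, as in the spatial wavefront sampling of Fig. 2(c).
n = 8
N = n * n
H = hadamard(N)              # each row reshapes into one +1/-1 sampling mask
obj = np.random.rand(N)      # flattened quantity to be demultiplexed
meas = H @ obj               # one scalar measurement per projected mask
rec = (H.T @ meas) / N       # demultiplex: inverse Hadamard transform

assert np.allclose(rec, obj)
```

Because Hadamard matrices satisfy HᵀH = N·I, the demultiplexing step is a single matrix product; with fewer masks than N, the same model is the starting point for the compressive sensing recovery discussed in the text.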
Fig. 3. Experimental verification of the proposed technique. (a) Schematics of the system. Key: LS, laser source; L1, L2, and L3, lenses; OBJ, object; CL, condensing lens; PD, position detector. (b) Image of the detector used in our experiments. It includes four electrodes connected to a metallic surface, whose voltage measurements provide both the irradiance of the light beam and the position of its centroid. (c) Experimental results for a plate simulating a coma aberration. As the sample is transparent, the amplitude image provides no information about the object. In the phase image, the object information is clearly recovered. (d) 3D view of the recovered phase.
Fig. 4. Reconstruction of a spherical wavefront and comparison with a commercial Shack–Hartmann sensor. (a) Photograph of the lens used as a phase object. (b) Spot map generated by the SH lenslet array. By comparing the position of each spot to a reference value, the spot displacements can be calculated. From those data, a 3D view of the phase can be recovered. (c) Wrapped phase of the lens. (d) Small region of the full displacement map for our system. Those displacements can be related to the gradient of the phase. Numerical integration of those gradients provides the phase of the object, and after decomposition into the Zernike basis, a high-resolution 3D view of the phase can be displayed (plot on the right). (e) Wrapped phase of the lens obtained using the proposed technique.
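The gradient-to-phase integration step mentioned in the caption can be sketched as follows. This is a minimal illustration with naive path integration of a synthetic defocus-like phase; it is not the authors' pipeline, which fits the integrated phase to a Zernike basis, and with noisy slopes a least-squares (Southwell-type) integrator is preferred.

```python
import numpy as np

# Sketch: rebuild a phase map from its slope maps by path integration
# (first down the first column, then along each row), trapezoidal rule.
n = 32
x = np.linspace(-1.0, 1.0, n)
dx = x[1] - x[0]
X, Y = np.meshgrid(x, x)
phi = X**2 + Y**2                            # known test phase (defocus-like)
gy, gx = np.gradient(phi, dx, edge_order=2)  # "measured" slope maps

phi_rec = np.zeros_like(phi)
# integrate gy down the first column to anchor each row
phi_rec[1:, 0] = np.cumsum((gy[1:, 0] + gy[:-1, 0]) / 2) * dx
# integrate gx along every row
phi_rec[:, 1:] = phi_rec[:, [0]] + np.cumsum((gx[:, 1:] + gx[:, :-1]) / 2, axis=1) * dx
phi_rec += phi[0, 0]                         # restore the constant offset

assert np.max(np.abs(phi_rec - phi)) < 1e-9
```

Path integration recovers the phase only up to a constant, which is why the offset must be fixed separately; the phase itself never needs unwrapping, since the slopes rather than a wrapped phase are measured.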
Fig. 5. High dynamic-range measurement and comparison with a commercial SH sensor. In the top row, we show the wrapped phase measured with the aid of an SH sensor when the lens shown in Fig. 4(a) is placed in an off-axis position. The maximum phase gradient measurable by the commercial sensor is 100λ. A zoomed region, marked in red, is shown in the right part of the figure. In the bottom row, we show the results for the same object obtained with the system shown in Fig. 3(a). In this case, the lens has been displaced a larger distance from the optical axis. The phase gradient measured is 217λ. A zoomed region, marked in blue, is shown in the right part of the figure. Given the small size of both figures, some moiré artifacts that are not part of the fringe pattern of a spherical phase gradient appear in the images on the left (see Supplement 1).
Fig. 6. Evolution of reconstruction quality when CS is used for wavefront recovery. For different values of the compression ratio (k/N², where k is the number of projected patterns), we present the correlation coefficient between the CS estimation and the recovery without undersampling. Here, the images have a resolution of 64×64 pixels, so N² = 4096. The points represent the median of the correlation coefficient for 100 different realizations of the algorithm (each realization corresponds to a different subset of random measurements).
Fig. 7. Comparison with phase-shifting interferometry. (a) Photograph of the photoresist layer used for the experiment. The black square represents the region that will be imaged, consisting of zones with and without photoresist material. (b) Amplitude and phase images obtained with the proposed technique. Due to the absorption properties of the object, the amplitude image is of poor quality. However, in the phase image, fine details of the sample can be observed. (c) 3D representation of the obtained phase. (d) Phase image obtained with phase-shifting interferometry for the same region of the object. In the inset, we show the region between the two small holes present in one of the photoresist bars. (e) Physical profile of the photoresist strip between the two holes obtained with an optical profilometer (Sensofar Plμ 2300).

Equations (11)


$$\Delta = (\Delta_x, \Delta_y) = \frac{\lambda f}{2\pi}\,\nabla\phi, \tag{1}$$

$$S = \iint I^2(x,y)\,\mathrm{d}x\,\mathrm{d}y, \tag{2}$$

$$\Delta = (\Delta_x, \Delta_y) = \frac{1}{S}\left(\iint x\,I^2(x,y)\,\mathrm{d}x\,\mathrm{d}y,\ \iint y\,I^2(x,y)\,\mathrm{d}x\,\mathrm{d}y\right). \tag{3}$$
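The centroid definition above is simply the irradiance-weighted mean position of the Fourier-plane distribution. A quick numerical check with a synthetic Gaussian irradiance pattern (all values illustrative):

```python
import numpy as np

# Check of the centroid definition: for a Gaussian irradiance pattern
# centered at (1.0, -0.5), the computed centroid must match that center.
x = np.linspace(-5.0, 5.0, 501)
X, Y = np.meshgrid(x, x)
I2 = np.exp(-((X - 1.0) ** 2 + (Y + 0.5) ** 2))   # irradiance I^2(x, y)
S = I2.sum()                                      # discrete total irradiance
cx = (X * I2).sum() / S
cy = (Y * I2).sum() / S

assert abs(cx - 1.0) < 1e-5 and abs(cy + 0.5) < 1e-5
```

A quadrant-style position detector performs exactly this weighted average in hardware, which is why a single analog measurement per mask suffices.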
$$\Delta = (\Delta_x, \Delta_y) = \frac{\lambda f}{2\pi S}\iint A^2(x,y)\,\nabla\phi(x,y)\,\mathrm{d}x\,\mathrm{d}y. \tag{4}$$

$$A(x,y) = K\cdot\operatorname{rect}\!\left(\frac{x-a}{L}\right)\cdot\operatorname{rect}\!\left(\frac{y-b}{L}\right), \tag{5}$$

$$\Delta_x = \frac{\lambda f K^2}{2\pi S}\int_{y=b-L/2}^{y=b+L/2}\int_{x=a-L/2}^{x=a+L/2}\frac{\partial\phi(x,y)}{\partial x}\,\mathrm{d}x\,\mathrm{d}y;\qquad \Delta_y = \frac{\lambda f K^2}{2\pi S}\int_{y=b-L/2}^{y=b+L/2}\int_{x=a-L/2}^{x=a+L/2}\frac{\partial\phi(x,y)}{\partial y}\,\mathrm{d}x\,\mathrm{d}y. \tag{6}$$

$$\Delta = (\Delta_x, \Delta_y) = \frac{K^2\lambda f L^2}{2\pi S}\,\langle\nabla\phi(x,y)\rangle_{a,b}, \tag{7}$$

$$\Delta = (\Delta_x, \Delta_y) = \frac{\lambda f}{2\pi}\,\langle\nabla\phi(x,y)\rangle_{a,b}. \tag{8}$$
$$y = \mathrm{M}x, \tag{9}$$

$$\Delta = \frac{\lambda f}{2\pi}\,\mathrm{M}\,\nabla\phi. \tag{10}$$

$$\hat{\alpha} = \arg\min \lVert\alpha\rVert_{1}\quad\text{such that}\quad \Phi\alpha = y. \tag{11}$$
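The ℓ1-minimization program above (basis pursuit) can be posed as a linear program and solved with a generic LP solver; the l1-magic package cited in the references solves the same problem. The sketch below is illustrative only, with a random Gaussian sensing matrix Φ and all problem sizes chosen for demonstration, not taken from the experiments.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, k, s = 64, 32, 4                      # signal length, measurements, sparsity
alpha_true = np.zeros(N)
alpha_true[rng.choice(N, size=s, replace=False)] = rng.standard_normal(s)
Phi = rng.standard_normal((k, N)) / np.sqrt(k)   # random sensing matrix
y = Phi @ alpha_true                     # undersampled measurements (k < N)

# Basis pursuit as an LP: write alpha = u - v with u, v >= 0, and
# minimize sum(u) + sum(v) subject to Phi @ (u - v) = y.
c = np.ones(2 * N)
res = linprog(c, A_eq=np.hstack([Phi, -Phi]), b_eq=y, bounds=(0, None))
alpha_hat = res.x[:N] - res.x[N:]

assert res.success
assert np.linalg.norm(Phi @ alpha_hat - y) < 1e-6
```

With k well above the sparsity level, the sparse coefficient vector is recovered from far fewer measurements than unknowns, which is what allows the time-multiplexed acquisition to be shortened as quantified in Fig. 6.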
