## Abstract

We propose a synchronous implementation of compressive imaging. This method is mathematically equivalent to prevailing sequential methods, but uses a static holographic optical element to create a spatially distributed spot array from which the image can be reconstructed with an instantaneous measurement. We present the holographic design requirements and demonstrate experimentally that the linear algebra of compressed imaging can be implemented with this technique. We believe this technique can be integrated with optical metasurfaces, which will allow the development of new compressive sensing methods.

Published by The Optical Society under the terms of the Creative Commons Attribution 4.0 License. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.

## 1. INTRODUCTION

Compressive sensing (CS) acquires information with fewer measurements than conventional sampling by exploiting the fact that signals are often sparse in some appropriate representation basis [1,2]. This permits, for example, an $N$ pixel image to be acquired with $M<N$ appropriate measurements and then reconstructed computationally. Compressive imaging (CI) is one of the main applications of CS. Rather than acquiring a large amount of data and then compressing the image to a smaller size, CS allows the image to be acquired efficiently from the outset. This efficiency is desirable, given the potential size of the raw data stream of, for example, a high-resolution polarization and hyperspectral image.

The prevailing CI architecture uses a digital micromirror device (DMD) in the image plane of the imaging system. It is reconfigured to direct different spatial distributions of the incident light to a single pixel detector. Single-pixel architectures are attractive because they allow imaging at wavelengths where high resolution detectors are unavailable or financially prohibitive [3], and utilize the wide wavelength performance and high frame rate of the DMD. Color CI has been considered in a number of different ways including using multiple single-pixel detectors combined with dichroic elements [4] or with color wheels in a field sequential arrangement [5]. These physical implementations, however, constrain the CI problems that can be considered [5].

A disadvantage of single-pixel cameras is that the measurements are acquired sequentially, limiting the frame rate and effective shutter speed of the acquired image. Parallel CI architectures, where all of the measurements are acquired in one shot, have also been considered [6]. As images get larger, sequential acquisition becomes less attractive because of frame rate limitations. Moreover, around visible wavelengths, where high-resolution multipixel detectors are readily available (CCD and CMOS camera sensors), the traditional advantage of single-pixel detectors is eroded. However, we believe the architecture proposed here has the potential to use such detectors more efficiently to acquire higher resolution images than conventional methods.

A number of different one-shot CI techniques have been considered. For example, an appropriately designed mask can be inserted between the image and focal plane and a higher resolution image calculated [7–9]. For one-shot spectral imaging applications, where CI techniques are applied across color rather than space, a mask can be inserted in part of the optical path with high spectral dispersion [10].

Here we propose a CI architecture that uses a hologram at the image plane to direct light from different image points toward an array of photodetectors in known ratios. We demonstrate monochromatic imaging, but this architecture is well positioned to be expanded to spectral as well as spatial CI. Furthermore, this method could be applied beyond visible wavelengths, including in terahertz CI [11].

Before explaining our concept, we briefly summarize the prevalent implementation of single-pixel CI to demonstrate that our proposed method effectively parallelizes it. We then describe the hologram design requirements, present experimental results demonstrating that the required linear algebra can be implemented, and finally discuss how this method could be applied.

#### A. Compressive Imaging Outline

In single-pixel imaging with a DMD, each of the $N$ pixels consists of a mirror in one of two states: either it does or does not direct the light falling upon it to a detector for each of the $M$ sequential measurements. This concept is represented by

$$\mathbf{G}=\mathbf{H}\mathbf{I}, \tag{1}$$

where $\mathbf{I}$ is the $N$-element vector of image pixel intensities, $\mathbf{G}$ is the $M$-element vector of detector measurements, and $\mathbf{H}$ is the $M\times N$ binary measurement matrix whose rows are the DMD mirror patterns.

Recovering $\mathbf{I}$ from $\mathbf{G}$ involves solving the underconstrained linear algebra problem of Eq. (1) by using the fact that in some appropriate basis the vector $\mathbf{I}$ will be sparse. In this work, we used the sparsifying discrete cosine transform (DCT) matrix $\mathbf{D}$ to transform the image, $\mathbf{S}=\mathbf{D}\mathbf{I}$. The linear algebra problem of Eq. (1) can then be recast as

$$\mathbf{G}=\mathbf{A}\mathbf{S}, \tag{2}$$

hence $\mathbf{A}=\mathbf{H}\mathbf{D}^{\intercal}$, as $\mathbf{D}$ is a unitary operator. An estimate of the sparse representation $\mathbf{S}^{\prime}$ is then found subject to an appropriate constraint; for imaging applications this is commonly minimizing the $l^{1}$ norm of the estimate for $\mathbf{S}$ in Eq. (2). The image can then be resynthesized by $\mathbf{I}^{\prime}=\mathbf{D}^{\intercal}\mathbf{S}^{\prime}$.

The choice of the DCT matrix for $\mathbf{D}$ is arbitrary. We chose it because it is found in many of the canonical expositions on CI [2]. Our work shows that the measurement linear algebra of CI can be implemented in a synchronous architecture using holograms. This acquisition method is compatible with many recovery methods.
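The measurement and recovery pipeline of Eqs. (1) and (2) can be sketched numerically. The following is a minimal, hypothetical illustration rather than the authors' pipeline: it invents a small sparse scene and a random positive $\mathbf{H}$, forms $\mathbf{A}=\mathbf{H}\mathbf{D}^{\intercal}$ with an orthonormal DCT, and recovers $\mathbf{S}$ by basis pursuit posed as a linear program, as a simple stand-in for the L1TestPack solver used later in the paper.

```python
import numpy as np
from scipy.fft import idct
from scipy.optimize import linprog

rng = np.random.default_rng(0)

N, M = 64, 32                          # N image pixels, M < N measurements
S_true = np.zeros(N)
S_true[[0, 3, 7]] = [5.0, -2.0, 1.0]   # invented sparse DCT representation S
I_true = idct(S_true, norm="ortho")    # image I = D^T S

H = rng.random((M, N))                 # positive spot-ratio measurement matrix
G = H @ I_true                         # one-shot measurements, G = H I (Eq. 1)

# A = H D^T maps sparse coefficients directly to measurements, G = A S (Eq. 2)
A = H @ idct(np.eye(N), axis=0, norm="ortho")

# Basis pursuit: minimise ||S||_1 subject to A S = G, as a linear program
# with the split S = u - v, u >= 0, v >= 0
c = np.ones(2 * N)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=G, bounds=[(0, None)] * (2 * N))
S_rec = res.x[:N] - res.x[N:]

I_rec = idct(S_rec, norm="ortho")      # resynthesised image I' = D^T S'
```

With only half the number of measurements as pixels, the $l^{1}$ minimizer still recovers the three-coefficient scene essentially exactly, which is the behaviour the holographic architecture must reproduce in hardware.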

## 2. PROPOSED ARCHITECTURE

In this one-shot implementation, $M$ different detectors are required, as opposed to the single-pixel detector used sequentially in the DMD case. Each image pixel is then required to direct the light falling upon it to each of the detectors in a given distribution. Figure 1(b) is a schematic where each of the $N$ image pixels direct light in known ratios—given by the columns of the matrix $\mathbf{H}$—toward the $M$ detectors. Thus, the rows of the matrix $\mathbf{H}$ give the contribution to each detector.

#### A. Implementation with Holograms

In this method, we propose to make each pixel in the image plane a (sub-)hologram designed to direct the light incident on it to the detectors in the appropriate ratios [12]. Each sub-hologram therefore defines a single column of the matrix $\mathbf{H}$. The sub-holograms can be designed for arbitrary positioning of the detectors in the replay field.

In the present work we have considered Fourier holograms. Fourier holography is a special case of coherent diffractive optics. The replay field of a Fourier hologram is simply the 2D spatial Fourier transform of the complex amplitude distribution of the light at the hologram [13]. The replay field is arrived at either (1) by letting light propagate for a sufficiently long distance from the hologram (the larger the feature size, the longer the distance) such that the optical far field is reached; or (2) by using a simple lens, which renders the replay field at the focal plane of the lens. We used Fourier holograms in this work for two principal reasons.

First, Fourier holograms are relatively straightforward to compute, being generated by a 2D discrete Fourier transform (DFT). This simple relationship assists design by global optimization methods. Nonetheless, designing the holograms is still a significant computational undertaking.
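As a concrete numerical illustration of this relationship, the replay field of a hypothetical phase-only hologram is a single 2D FFT away, and Parseval's theorem confirms that the hologram only redistributes power between spots rather than creating or absorbing it:

```python
import numpy as np

rng = np.random.default_rng(1)

n = 100                                    # sub-hologram size in pixels
phase = rng.uniform(0, 2 * np.pi, (n, n))  # a hypothetical random phase hologram
hologram = np.exp(1j * phase)              # unit-amplitude, phase-only modulation

# Replay field: the 2D DFT of the complex amplitude at the hologram plane,
# with fftshift placing zero spatial frequency at the centre
replay = np.fft.fftshift(np.fft.fft2(hologram))
intensity = np.abs(replay) ** 2

# Parseval: intensity.sum() / n**2 equals the n*n units of power
# entering the hologram, whatever the phase pattern is
```

This power conservation is what a design algorithm trades on: every unit of power steered into a target spot must come from somewhere else in the replay field.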

Second, as the feature size of the technology implementing the hologram becomes smaller, the distance from the hologram to the optical far field decreases, to the point that a lens-less system could be envisaged with Fourier holograms. However, in the present system the use of an SLM with a relatively coarse pixel pitch requires a Fourier lens to generate the replay field within a reasonable distance.

Figure 2(a) shows an example of such a sub-hologram. In this case, it is a phase hologram that produces a number of spots in the replay field. These have been designed to have known magnitudes given by the columns of the chosen measurement matrix $\mathbf{H}$. Using the method proposed in this paper, any matrix $\mathbf{H}$ of continuous real positive scalar values is possible. An array of these sub-holograms is tiled together to form a hologram as shown in Fig. 2(b). Each sub-hologram produces a replay field with spots of different magnitudes that combine to give the elements of the vector $\mathbf{G}$.

It is important at this point to recognize that (spatially) coherent light is required by the holograms. Hence, we must consider the replay fields as complex amplitude distributions. However, we are considering the problem of measuring intensities $\mathbf{G}$ and recovering an intensity image $\mathbf{I}$.

As an aside, we briefly consider the problem in terms of amplitude, rather than intensity, to demonstrate why it is not a good idea. Consider solving the associated underconstrained linear algebra problem

$$\mathbf{g}=\mathbf{h}\mathbf{i}, \tag{3}$$

where $\mathbf{h}$ is explicitly complex and $\mathbf{i}$ is the complex light distribution we are attempting to measure. This approach has two main problems. First, our detectors measure the intensity ${|\mathbf{g}|}^{2}$, and solving this linear problem without the phase of $\mathbf{g}$ is difficult, with generally poor results [14]. Second, it requires each spot due to each sub-hologram to have a very well defined phase, which is difficult to achieve both experimentally and in terms of hologram design.

To implement the intensity problem of Eq. (1), we require the complex amplitudes of each overlapping spot due to each sub-hologram to add as if we were adding intensities. Note the spots are in fact extended regions with structure, as clearly visible in Fig. 2. We can express this requirement for any pair of overlapping spots, with amplitudes ${a}_{1},{a}_{2}$ and phases ${\alpha}_{1},{\alpha}_{2}$, due to two different sub-holograms at the same detector site as

$$\int_{\mathrm{spot}}{\left|{a}_{1}{e}^{j{\alpha}_{1}}+{a}_{2}{e}^{j{\alpha}_{2}}\right|}^{2}=\int_{\mathrm{spot}}{a}_{1}^{2}+\int_{\mathrm{spot}}{a}_{2}^{2}. \tag{4}$$

The interference cross term $2\int_{\mathrm{spot}}{a}_{1}{a}_{2}\cos({\alpha}_{1}-{\alpha}_{2})$ vanishes when:

- 1. The phase in a given spot is uniformly distributed, such that the mean phasor is zero ${\int}_{\mathrm{spot}}{e}^{j\alpha}=0$.
- 2. The amplitude $a$ and phase $\alpha $ of any given spot for a given sub-hologram replay field are independent random variables.
- 3. The phase and amplitude distributions of the spots in the replay fields of the different sub-holograms are independent.

Provided that these conditions are met, we can ignore the fact that we are really dealing with complex amplitudes and simply consider the standard compressed sensing linear algebra of Eq. (1). Meeting this requirement is one of the main difficulties in implementing this technique. In Section 3.A we discuss how we have achieved this in the current implementation.
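These conditions can be checked with a toy Monte Carlo model. In the sketch below, two overlapping spots are modelled as collections of speckle-like samples with independent random amplitudes and phases (the Rayleigh amplitude statistics are an illustrative assumption); the cross term of Eq. (4) then averages away and the coherent sum behaves as an intensity sum:

```python
import numpy as np

rng = np.random.default_rng(2)

# Two overlapping "spots" from different sub-holograms, modelled as K
# speckle-like samples satisfying conditions 1-3 above: uniform independent
# phases, amplitudes independent of phase (Rayleigh is illustrative only)
K = 100_000
a1, p1 = rng.rayleigh(1.0, K), rng.uniform(0, 2 * np.pi, K)
a2, p2 = rng.rayleigh(1.0, K), rng.uniform(0, 2 * np.pi, K)

u1 = a1 * np.exp(1j * p1)
u2 = a2 * np.exp(1j * p2)

# What the detector integrates (coherent sum) versus intensity addition
coherent = np.sum(np.abs(u1 + u2) ** 2)
incoherent = np.sum(np.abs(u1) ** 2 + np.abs(u2) ** 2)
# The cross term 2*sum(a1*a2*cos(p1 - p2)) averages to zero, so the two
# totals agree to a fraction of a percent and Eq. (1) applies to intensities
```

The relative discrepancy between the two sums shrinks as the spot contains more independent samples, which is why extended, structured spots are used rather than single diffraction-limited points.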

## 3. EXPERIMENTAL DEMONSTRATION

To demonstrate this technique, we designed a hologram and displayed it on a reconfigurable liquid crystal (LC) SLM. Figure 3(a) shows an outline of the experimental setup. The SLM is a $1024\times 1280$ CRL OPTO nematic LC reflective SLM. The hologram consisted of $10\times 10$ sub-holograms of $100\times 100$ pixels each. It was illuminated at a small angle by a collimated fiber-coupled diode laser ($\lambda =635\text{\hspace{0.17em}}\mathrm{nm}$). We then used a $2f$ system to form the replay field on a camera sensor array (Canon 550D), as shown in Fig. 3(b). We used a mask to block out the zero-order peak and bands at zero horizontal and vertical frequency due to the structure of the SLM. In an ideal system, there would be no zero order, but the inter-pixel dead space of this SLM prohibits it. The horizontal and vertical zero-frequency features are due to the pixel shape and SLM border-defining aperture functions. The intensity over a large number of pixels was binned to reproduce the effect of having an array of photodetectors. By choosing how many of the image pixels and detector spots to use, we could define the resolution and the size of the matrix $\mathbf{H}$.

While it was unnecessary to use a reconfigurable SLM in this application, it did have the advantage that each of the sub-holograms could be displayed separately. This allowed the measurement matrix $\mathbf{H}$ to be directly measured. Figure 3(c) shows the results of this measurement. Clearly it is not the random matrix as desired by CI, but has horizontal and vertical artifacts. The vertical stripes are due to the (near) Gaussian illumination profile from the laser. Moving across the image pixels (sub-holograms), these are brighter toward the center of the beam and dimmer toward the edge. The horizontal stripes are due to the apodization of the SLM. The fundamental pixel shape on the SLM corresponds to a sinc envelope in the replay field. Going down the matrix we move through the different spots in the replay field. Generally these are dimmer as they are further away from zero frequency. These features can be compensated for when recovering the image.

#### A. Hologram Design

We designed the hologram with a custom simulated annealing (SA) program written in Matlab. The replay field was designed to place the spots away from the masked-out regions. Initially, the sub-holograms were designed independently. The structure of each spot was allowed to evolve freely at first, with only the total power inside it considered when evaluating the error of a candidate hologram. This yielded quick convergence to good holograms because of the large number of degrees of freedom afforded. Once we obtained a reasonable amplitude distribution, it was then important to ensure that the conditions of Section 2.A were met. The initial sub-holograms were quite close to satisfying the requirement of Eq. (4): the random aspect of the SA design process means that the spots due to different sub-holograms were uncorrelated, and the unconstrained structure of each spot means that its amplitude and phase were only weakly related, with a mean phasor of approximately zero. However, they were still not optimal.

To improve performance, we applied further SA optimization to the whole hologram. We chose a random subset of the sub-holograms (50%). We then evaluated the error between the sum of the intensities of the spots due to each sub-hologram, and the intensity of the spot due to all of the chosen sub-holograms. This error value, combined with considering any deleterious effects on the intensity distribution, was used to determine whether or not a given random change should be kept.
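A minimal sketch of this kind of simulated-annealing design loop is given below. It is not the authors' Matlab program: the sub-hologram size, detector sites, target power ratios, temperature, and cooling schedule are all invented for illustration, and only the spot-power error of the first design stage is optimized, not the joint intensity-addition cost.

```python
import numpy as np

rng = np.random.default_rng(3)

n = 32                                    # small illustrative sub-hologram
sites = [(5, 7), (12, 20), (25, 9)]       # hypothetical detector sites (replay bins)
ratios = np.array([3.0, 2.0, 1.0])        # hypothetical column of H for this pixel
total_power = n**2 * n**2                 # fixed by Parseval for phase-only holograms
target = ratios / ratios.sum() * 0.3 * total_power  # aim 30% of power at the spots

def cost(phase):
    replay = np.abs(np.fft.fft2(np.exp(1j * phase))) ** 2
    powers = np.array([replay[s] for s in sites])
    return np.sum((powers - target) ** 2)

phase = rng.uniform(0, 2 * np.pi, (n, n))
err0 = cost(phase)
err, T = err0, err0 / 100                 # ad hoc initial "temperature"
for _ in range(4000):
    i, j = rng.integers(0, n, size=2)
    old = phase[i, j]
    phase[i, j] = rng.uniform(0, 2 * np.pi)   # random candidate change
    new_err = cost(phase)
    if new_err < err or rng.random() < np.exp(-(new_err - err) / T):
        err = new_err                     # accept the change
    else:
        phase[i, j] = old                 # reject: revert the pixel
    T *= 0.998                            # cool toward a greedy search
```

Each iteration perturbs one pixel's phase, always accepts downhill moves, accepts uphill moves with Boltzmann probability, and slowly cools; the spot-power error decreases as the temperature falls.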

Figure 4 shows the effect of this optimization. The error in Eq. (4) was reduced as the simulated annealing process was repeatedly applied to the hologram. Examples of image recovery at different stages are shown, both grayscale and thresholded. Recovery of the more complex grayscale images was challenging, largely because CI is not particularly effective at such low resolutions. We used the L1TestPack software [15,16] for image recovery. This demonstrated that we could optimize the hologram to compensate for the complex nature of the replay fields.

There is significant room to explore improving this algorithm, either by adjusting the SA parameters or using a more sophisticated statistical approach.

It should be noted that designing high-resolution holograms with the required replay field properties is a significant computational undertaking. However, they only need to be designed once for a given implementation.

#### B. Results

To test the system, we created images by masking regions of the hologram on the SLM by setting them to constant phase. This caused the light landing on these flat-phase regions to be directed to the zero-order, rendering these pixels dark as far as the spots in the replay field were concerned.

Figure 5 shows some experimental results. In Fig. 5(a), for the overconstrained case (where Eq. (1) has $\mathbf{I}$ overconstrained by $\mathbf{G}$), we see excellent recovery. This result suggests that the linear algebra was working as expected. For the case of a moderate compression ratio we see a small degradation in image quality. In Fig. 5(b), we see excellent image recovery for the case of a highly sparse target. Note that the thresholding applied in the third panel was at half of the nominal maximum intensity.

Again, we used L1TestPack [15,16] for the image recovery, with no denoising procedures. At these low resolutions, the image representations were not particularly sparse; therefore, the gain from CS was small, and the performance was highly image-dependent.

This work demonstrates that this technique implements the linear algebra platform required by CI with sufficient accuracy for image recovery. Noise-tolerant variants of the reconstruction algorithm could be used to recover images from noisy data. Moreover, how the deterministic noise due to the violation of the requirement of Eq. (4) affects the CI system, as well as the general effect of hologram quality, is an interesting question. Perhaps it is possible to characterize the deterministic noise and use that to improve recovery. However, this is beyond the scope of the present work.

## 4. DISCUSSION

This CI implementation has a number of advantages. The use of a hologram gives considerable freedom in the design of the replay field, permitting a continuous-valued matrix $\mathbf{H}$ rather than the binary matrix imposed when a DMD is used. Furthermore, the hologram can be designed for arbitrary detector positions, as has been capitalized on in this work. This would be particularly desirable for broad wavelength range scenarios, as appropriate detectors for different wavelengths could be spatially separated.

However, there are some limitations to this technique arising from the use of holograms as the optical element. We now candidly outline these limitations and suggest methods to overcome them.

#### A. Limitations of the Technique

First, light with some degree of coherence is required, reducing the number of applications where this architecture would be appropriate. Second, the use of a holographic element requires a low numerical aperture imaging system to avoid the spots spreading out due to varied incoming light angles. A pinhole camera would help satisfy these first two requirements. Third, the effective detector resolution is limited by the fact that the spot array is diffraction limited. Finally, high spatial frequency images can cause a breakdown of the technique. We discuss these final two issues in more detail in the next two sections.

### 1. Maximum Attainable Resolution

Consider that each image pixel is a sub-hologram with dimension $d$. This size defines an effective aperture for the corresponding replay field. The diffraction-limited spot size due to this aperture is $u=\frac{1}{d}$ in units of spatial frequency. Spatial frequency is related to physical displacement in the replay field by

$$x=\lambda Ru,$$

where $R$ is either the focal length of the lens in a $2f$ system or the distance from the hologram to where the replay field is being considered in a Fraunhofer system [13]. Hence, we have a spot area of

$${\left(\frac{\lambda R}{d}\right)}^{2}.$$

If we consider that the maximum physically possible replay field area is that of a half-sphere of radius $R$, then the total number of diffraction limited spots that can fit into the replay field is

$$\frac{2\pi {R}^{2}}{{(\lambda R/d)}^{2}}=\frac{2\pi {d}^{2}}{{\lambda}^{2}}. \tag{11}$$

### 2. Effect of Partial Hologram Illumination

It is also worth considering that any image falling on the hologram is unlikely to consist of pixels perfectly aligned with the sub-holograms. Instead the pixels will be to some extent partially or unevenly illuminated. Consider a particular sub-hologram with complex amplitude transmission function $t(x,y)$, which in the Fourier plane produces the distribution $T(u,v)=\mathcal{F}\{t(x,y)\}$ (to within a constant factor). We can consider the pixel illumination (a segment of the total image) as another function $l(x,y)$. Thus the light field on the hologram is $lt$, and in the replay field we have the distribution $T*L$, where $*$ represents convolution and $L(u,v)=\mathcal{F}\{l(x,y)\}$.

The higher the spatial frequency of the input, the broader the convolution kernel $L$. Depending on the separation distance of the spots in the replay field, this will lead to crosstalk. This shows that, like all imaging systems, we are bandwidth limited. However, unlike nonholographic imaging systems (either conventional or CS) the penalty for high frequency images is not imaging artifacts (such as Moiré patterns), but a breakdown of the linear algebra system. A low-pass filter on the input would be required for a general imaging application, or alternatively spot separation would have to be chosen based on the system’s resolving power.
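This convolution effect is easy to reproduce numerically. The toy model below uses an invented single-spot grating pixel, not one of the designed sub-holograms, and shows that illuminating only half of the pixel area moves half of its power out of the target replay bin:

```python
import numpy as np

n = 128
kx = 16                                        # grating frequency (target spot bin)
x = np.arange(n)
# Toy sub-hologram t(x,y): a blazed grating sending all light to one spot
t = np.tile(np.exp(2j * np.pi * kx * x / n), (n, 1))

def power_fraction_at_spot(field):
    replay = np.abs(np.fft.fft2(field)) ** 2
    return replay[0, kx] / replay.sum()        # fraction of power in the target bin

full = power_fraction_at_spot(t)               # uniform illumination, l = 1

# Partial illumination: only the left half of the pixel is lit. The replay
# becomes T * L (convolution with L = F{l}), spreading power out of the bin
l = np.zeros((n, n))
l[:, : n // 2] = 1.0
partial = power_fraction_at_spot(l * t)
```

Here `full` evaluates to 1.0 and `partial` to 0.5: the half-lit pixel leaks half of its power into the neighbouring spatial frequencies of the sinc kernel, which in a dense spot array appears as crosstalk between detector sites.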

#### B. Overcoming Limitations

While there is a clear advantage in requiring only $M$ detectors rather than the $N$ a traditional imaging architecture would use, this saving is not as dramatic as the factor-$N$ reduction offered by single-pixel compressive imaging, although the single-pixel approach pays for it with $M$ sequential measurements. Furthermore, in Eq. (11) we demonstrated that high resolutions are not achievable with this holographic architecture alone. We envision this method becoming relevant when incorporated with two other technologies.
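The scale of the Eq. (11) limit can be checked numerically. In the sketch below, only the wavelength and the 100-pixel sub-hologram size come from the experiment; the SLM pixel pitch is an assumed, illustrative value:

```python
import numpy as np

# Spot-count limit: a spot area of (lambda*R/d)^2 inside a half-sphere replay
# area of 2*pi*R^2 gives N_spots = 2*pi*d**2/lambda**2, independent of R
wavelength = 635e-9        # laser wavelength used in the experiment (m)
pitch = 10e-6              # assumed SLM pixel pitch (m); illustrative value only
d = 100 * pitch            # aperture of one 100x100-pixel sub-hologram (m)

n_spots = 2 * np.pi * d**2 / wavelength**2   # of order 1e7 for this aperture
```

The quadratic dependence on $d$ is the crux: shrinking the sub-holograms to raise the image resolution on a fixed-size hologram shrinks this spot budget with them.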

First, the implementation of the hologram should take advantage of evolving research into flat optics and optical metasurfaces [17]. This has already been done for 1D imaging at microwave frequencies [18]. One could implement holograms with a technology that is wavelength and polarization selective [19,20] and then multiplex different holograms. Then the spots in the replay field could contain contributions not just from known positions in the image, but from known wavelengths and polarization states. This would allow synchronous compressive imaging of the form $\mathbf{G}=\mathbf{HI}$, but where

$$\mathbf{I}={\left({\mathbf{I}}_{1}^{\intercal},{\mathbf{I}}_{2}^{\intercal},\ldots,{\mathbf{I}}_{C}^{\intercal}\right)}^{\intercal} \tag{12}$$

is the concatenation of the image vectors for each of the $C$ wavelength and polarization channels, and $\mathbf{H}$ is a full matrix with scalar values at each point. This approach would open up methods for color-, hyperspectral-, and polarization-based imaging, with flexibility for less constrained mathematical approaches to exploit the common information between the different channels [5,21,22]. A further advantage of using smaller scale holograms is the relaxation of the transverse coherence requirement.

Second, we must overcome the resolution limitation by combining this approach with a dynamic element, making each of the sub-holograms active. This change would mean we no longer have a synchronous measurement technique, but we would benefit from a speed-up of $M$ relative to the single-pixel approach, and with higher flexibility. Without the advantage of Eq. (12), any SLM technology could be used to simply extend the experimental technique demonstrated here. However, a more exotic solution would be required to implement Eq. (12) dynamically. For example, a fixed hologram could be combined with some kind of active element, such as the mirrors on a DMD or modulation with LCs [23]. One can imagine a hybrid few-shot CI system using such approaches.

## 5. CONCLUSION

We have presented an implementation of the standard compressive imaging linear algebra, but with a synchronous rather than time-sequential measurement approach. To do this, we used a hologram at the image plane, where each image pixel was a sub-hologram able to direct light in known ratios toward an array of sensors. We have discussed the design requirements of the hologram, and experimentally demonstrated the required linear algebra.

Standing alone, this method suffers from shortcomings in terms of the achievable resolution. However, if combined with a dynamic holographic element we believe this technique is well positioned to capitalize on the extensive progress being made into the field of flat optics and optical metasurfaces. It could, in fact, provide a platform for high-speed flexible hyperspectral and polarization compressive sensing, and open up new mathematical approaches to image recovery.

*Data access:* Additional data related to this publication is available at the University of Cambridge research repository [24].

## Funding

Engineering and Physical Sciences Research Council (EPSRC) (EP/G037256/1, EP/L015455/1).

## Acknowledgment

This work was conducted under a Centre for Doctoral Training in Integrated Photonic and Electronic Systems (CDT-IPES). The authors would like to thank James Dolan, George Gordon, Ammar Khan, and Calum Williams for fruitful discussions.

## REFERENCES

**1. **D. Takhar, J. N. Laska, M. B. Wakin, M. F. Duarte, D. Baron, S. Sarvotham, K. F. Kelly, and R. G. Baraniuk, “A new compressive imaging camera architecture using optical-domain compression,” Proc. SPIE **6065**, 606509 (2006).

**2. **R. Baraniuk, “Compressive sensing,” IEEE Signal Process. Mag. **24**(4), 118–121 (2007). [CrossRef]

**3. **M. F. Duarte, M. A. Davenport, D. Takhar, J. N. Laska, T. Sun, K. F. Kelly, and R. G. Baraniuk, “Single-pixel imaging via compressive sampling,” IEEE Signal Process. Mag. **25**(2), 83–91 (2008). [CrossRef]

**4. **S. Welsh, M. Edgar, R. Bowman, P. Jonathan, B. Sun, and M. Padgett, “Fast full-color computational imaging with single-pixel detectors,” Opt. Express **21**, 23068–23074 (2013). [CrossRef]

**5. **P. Nagesh and B. Li, “Compressive imaging of color images,” in *IEEE International Conference on Acoustics, Speech and Signal Processing* (IEEE, 2009), pp. 1261–1264.

**6. **M. A. Neifeld and J. Ke, “Optical architectures for compressive imaging,” Appl. Opt. **46**, 5293–5303 (2007). [CrossRef]

**7. **T. Björklund and E. Magli, “A parallel compressive imaging architecture for one-shot acquisition,” arXiv preprint arXiv:1311.0646 (2013).

**8. **A. Stern and B. Javidi, “Random projections imaging with extended space-bandwidth product,” J. Disp. Technol. **3**, 315–320 (2007). [CrossRef]

**9. **A. Stern, Y. Rivenson, and B. Javidi, “Single-shot compressive imaging,” Proc. SPIE **6778**, 67780J (2008).

**10. **M. E. Gehm, R. John, D. J. Brady, R. M. Willett, and T. J. Schulz, “Single-shot compressive spectral imaging with a dual-disperser architecture,” Opt. Express **15**, 14013–14027 (2007). [CrossRef]

**11. **C. M. Watts, D. Shrekenhamer, J. Montoya, G. Lipworth, J. Hunt, T. Sleasman, S. Krishna, D. R. Smith, and W. J. Padilla, “Terahertz compressive imaging with metamaterial spatial light modulators,” Nat. Photonics **8**, 605–609 (2014). [CrossRef]

**12. **A. Macfaden, S. J. Kindness, and T. D. Wilkinson, “An implementation of one-shot compressive imaging using a diffractive optical element,” in *Frontiers in Optics* (Optical Society of America, 2015), paper FTh1F.2.

**13. **J. Goodman, *Introduction to Fourier Optics* (McGraw-Hill, 2008).

**14. **H. Ohlsson, A. Yang, R. Dong, and S. Sastry, “Compressive phase retrieval from squared output measurements via semidefinite programming,” arXiv preprint arXiv:1111.6323 (2011), pp. 1–27.

**15. **D. A. Lorenz, “Constructing test instances for basis pursuit denoising,” IEEE Trans. Signal Process. **61**, 1210–1214 (2013). [CrossRef]

**16. **D. A. Lorenz, L1TestPack: a software package to generate test instances for $l^{1}$ minimization problems (2011).

**17. **N. Yu and F. Capasso, “Flat optics with designer metasurfaces,” Nat. Mater. **13**, 139–150 (2014). [CrossRef]

**18. **J. Hunt, T. Driscoll, A. Mrozack, G. Lipworth, M. Reynolds, D. Brady, and D. R. Smith, “Metamaterial apertures for computational imaging,” Science **339**, 310–313 (2013). [CrossRef]

**19. **M. Ozaki, J. Kato, and S. Kawata, “Surface-plasmon holography with white-light illumination,” Science **332**, 218–220 (2011).

**20. **W. T. Chen, K.-Y. Yang, C.-M. Wang, Y.-W. Huang, G. Sun, C. Y. Liao, W.-l. Hsu, H. T. Lin, S. Sun, L. Zhou, A. Q. Liu, and D. P. Tsai, “High-efficiency broadband meta-hologram with polarization controlled dual images,” Nano Lett. **14**, 225–230 (2014). [CrossRef]

**21. **Y. August, C. Vachman, Y. Rivenson, and A. Stern, “Compressive hyperspectral imaging by random separable projections in both the spatial and the spectral domains,” Appl. Opt. **52**, D46–D54 (2013). [CrossRef]

**22. **A. Majumdar and R. K. Ward, “Compressed sensing of color images,” Signal Process. **90**, 3122–3127 (2010). [CrossRef]

**23. **C. Williams, Y. Montelongo, J. O. Tenorio-Pearl, A. Cabrero-Vilatela, S. Hofmann, W. I. Milne, and T. D. Wilkinson, “Engineered pixels using active plasmonic holograms with liquid crystals,” Phys. Status Solidi **9**, 125–129 (2015).