Single-pixel detectors can be used as imaging devices by making use of structured illumination. These systems work by correlating a changing incident light field with signals measured on a photodiode to derive an image of an object. In this work we demonstrate a system that utilizes a digital light projector to illuminate a scene with approximately 1300 different light patterns every second and correlates these with the backscattered light measured by three spectrally-filtered single-pixel photodetectors to produce a full-color, high-quality image in a few seconds of data acquisition. We utilize a differential light projection method to self-normalize the measured signals, improving the reconstruction quality whilst making the system robust to external sources of noise. This technique can readily be extended for imaging applications at non-visible wavebands.
© 2013 Optical Society of America
Over the past two decades there has been considerable research in ghost imaging and computational imaging with single-pixel detectors. Although both research areas have different origins, there are instances in which they are closely related. Initially, ghost imaging relied on the use of two correlated light fields and two photodetectors to produce an image: one detector with no spatial resolution collects a light field that has interacted with an object, while a second detector with high spatial resolution collects the other, correlated light field, which never interacts with the object. Neither detector alone is capable of imaging the object; however, combining the measurements made by both detectors can produce an image.
Early demonstrations of ghost imaging utilized the entanglement arising from spontaneous parametric down-conversion to produce the correlated light fields, an approach often referred to as quantum ghost imaging [1, 2]. The presence of entanglement led many to interpret this as fundamentally quantum behavior. However, there have since been demonstrations using a pseudothermal light source [3–7], whereby laser light is propagated through a ground glass diffuser to produce a speckle field, after which a beam splitter makes a correlated copy of the field, a technique commonly termed classical ghost imaging. For both classical and quantum ghost imaging approaches, the field-of-view, spatial resolution, contrast and signal-to-noise ratio of an image can be described by semi-classical photodetection theory.
Subsequently, the need for a beam splitter was removed by using a spatial light modulator capable of generating a programmable light field to illuminate the object. Controlling both the intensity and phase allows the intensity structure to be calculated at any plane and stored in computer memory rather than measured on a detector with high spatial resolution. This simplified experimental approach is known as computational ghost imaging and can be performed on systems with and without lenses between the source and object. When an imaging system is employed, any phase information becomes redundant, since only the intensity correlations of the light reflected or transmitted from an object are used to produce an image. Within the signal processing community this latter approach is referred to as a single-pixel camera employing structured illumination.
Interestingly, for imaging systems based on structured illumination, the position of the single-pixel detector has been shown to determine the apparent illumination direction for the scene. Moreover, by employing multiple single-pixel devices in different positions, variations in the image shading profiles allow for accurate three-dimensional images to be retrieved.
The use of single-pixel detectors as imaging devices has been studied extensively, in particular within the field of compressed sensing, which exploits the sparseness of natural images to significantly reduce the amount of information needed to reproduce them. Indeed, it is this feature of natural images that is at the heart of well-known lossy image compression algorithms, such as JPEG.
In this work we present a computational imaging system that can produce full-color 2D images of 3D scenes containing multiple objects within a few seconds. Our system makes use of structured illumination and three single-pixel detectors to simultaneously obtain the red, green and blue color planes. We compare color images reconstructed when an iterative algorithm and a compressed sensing technique are applied. The latter approach is shown to yield high-quality, full-color images from a sub-Nyquist number of measurements.
2. Experimental setup
The setup used in this experiment is shown in Fig. 1. In contrast to classical ghost imaging, this approach relies on using a digital light projector (DLP) to provide spatially incoherent binary structured illumination, which is imaged to the plane of the scene with a 55 mm lens. The DLP contains a digital micro-mirror device (DMD) and three colored (red, green and blue) light emitting diodes (LEDs). For each incident pattern the total intensity reflected from the scene is directed onto a composite dichroic beamsplitter (X-Cube) using a large collection lens. The dichroic beamsplitter spectrally filters red, green and blue light towards different outputs, allowing subsequent measurement on three unfiltered single-pixel photodetectors, as shown in the callout of Fig. 1. The computer records all three photodetector signals via an analogue-to-digital converter, which are then used for image reconstruction with an appropriate algorithm.
In order to obtain images from the system quickly, it is necessary to rapidly illuminate a scene with different structured light fields. The DMD contained in the light projector has a maximum array switching rate of 1440 Hz, determined by the 60 Hz frame rate and 24 bit-planes (used for color depth) of the DLP. As we require only binary illumination patterns, we operate the projector with all three LEDs permanently on and can take advantage of the color planes to display 24 different binary images per frame.
One main challenge when displaying at this rate is overcoming the various latencies involved in the computer graphics pipeline. This is addressed by adding several synchronization processes during data acquisition. One such process introduces 'flash' bit-planes at the start of the sequence of images which make up one whole frame: all mirrors are turned to the on position and then all mirrors to the off position. The remaining 22 bit-planes are then used to display random patterns for the reconstruction process. Additionally, every 50th full frame is used as a synchronization frame, consisting of 24 alternating black and white bit-planes. This allows robust correlation of the frames to the measured signals and ensures that only data found between two matching synchronization frames is actually used in the reconstruction algorithm or saved for post-processing.
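The flash-based frame synchronization can be illustrated with a toy sketch: an all-on bit-plane yields a maximal photodiode reading, immediately followed by a minimal reading from the all-off bit-plane, which marks the frame boundary. The function name, thresholds and sample values below are assumptions for illustration only, not values from the actual system.

```python
import numpy as np

def find_frame_starts(samples, hi=0.9, lo=0.1):
    """Return indices where a near-maximal sample is immediately
    followed by a near-minimal one (the 'flash' on/off pair).
    Thresholds hi and lo are illustrative assumptions."""
    s = np.asarray(samples, dtype=float)
    return [i for i in range(len(s) - 1) if s[i] >= hi and s[i + 1] <= lo]

# Toy stream: two frames of 6 bit-planes, each starting with flash on/off.
stream = [1.0, 0.0, 0.5, 0.4, 0.6, 0.3,
          1.0, 0.0, 0.2, 0.7, 0.5, 0.4]
starts = find_frame_starts(stream)  # frame boundaries at indices 0 and 6
```

In practice the thresholds would be set relative to the measured dynamic range of the photodiode rather than fixed constants.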
An important part of keeping the system synchronized is ensuring that a new frame is displayed every time the projector refreshes. This is achieved through the use of an OpenGL program that only changes the image when the projector refreshes. The OpenGL program is called from a dedicated loop in the LabVIEW control program, such that it always runs at 60 Hz. If pattern generation or acquisition runs at a slower rate than this, patterns are displayed twice, and the acquisition code knows to ignore repeated patterns. Similarly, data acquisition is run from a dedicated loop so that data can be acquired continuously; separate loops split this data into frame-sized chunks and match it up with the patterns displayed on the DMD. LabVIEW's queue structures are used to pass data asynchronously between loops, and LabVIEW's native parallelism allows separate loops to run on different processor cores for increased speed. Within each frame, the 22 bit-planes that are not used for synchronization are split into 11 pairs. Each pair consists of a pattern and its inverse. This allows us to make differential measurements, analogous to lock-in detection at 720 Hz. Differential detection significantly reduces the influence of noise, such as ambient light fluctuations, on the measurement. Overall, the system is able to display and measure approximately 650 unique patterns per second, once differential measurement and synchronization are taken into account.
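The differential pairing can be sketched as a small toy in Python (the real system acquires these signals in LabVIEW; names here are illustrative). The key point is that any additive offset common to both readings of a pair, such as ambient light, cancels in the subtraction.

```python
import numpy as np

rng = np.random.default_rng(0)

def differential_signals(raw):
    """Pair consecutive (pattern, inverse-pattern) photodiode readings
    and subtract.

    raw: 1D array of samples of even length, ordered
         [S1+, S1-, S2+, S2-, ...].
    Any offset common to both readings of a pair cancels.
    """
    raw = np.asarray(raw, dtype=float)
    pos = raw[0::2]   # pattern readings
    neg = raw[1::2]   # inverse-pattern readings
    return pos - neg

# Toy check: a constant ambient offset (0.5) added to every sample
# drops out of the differential signal.
true = rng.normal(size=10)
raw = np.empty(20)
raw[0::2] = 0.5 + true   # pattern reading plus ambient offset
raw[1::2] = 0.5 - true   # inverse reading plus the same offset
diff = differential_signals(raw)   # equals 2 * true, offset removed
```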
3. Image reconstruction
When performing imaging with single-pixel detectors, there are two main types of reconstruction algorithm that can be used to process the acquired data. Iterative algorithms make a refined estimate of the scene after each new measurement, while inversion algorithms utilize the entire data set in a bulk process to find the best solution for a set of unknowns. For both types of algorithm, the resolution of the final reconstruction is N = x × y, where x and y are the numbers of pixels used for illumination in the x and y dimensions. For each iteration, i, a unique 2D intensity pattern I_i(x, y) is projected onto the object and the corresponding reflected intensity (voltage signal) at spectral frequency μ is measured for each single-pixel photodetector, S_i^μ; thus we can write

S_i^μ = Σ_{x,y} I_i(x, y) O_μ(x, y),   (1)

where O_μ(x, y) is the reflectivity of the scene in spectral channel μ.
3.1. Iterative image reconstruction
For iterative image reconstructions it has been shown that normalization of the photodetector signals leads to an improvement in the overall image reconstruction [15, 16]. In this experiment we normalize the signals by maintaining an equal black/white ratio in each illuminating pattern and acquiring the differential signal between consecutive positive/negative patterns. The iterative algorithm we employ in our system is the traditional ghost imaging algorithm, defined by

O_μ(x, y) = ⟨(S_i^μ − ⟨S^μ⟩) I_i(x, y)⟩,   (2)

where ⟨·⟩ denotes an average over all measurements. Using the differential signal D_i^μ, obtained from each pattern and its inverse, Eq. 2 can be re-written as

O_μ(x, y) = ⟨(D_i^μ − ⟨D^μ⟩) I_i(x, y)⟩,   (3)

the result of which is shown in Fig. 2 for approximately 1 million measurements. The full-color image is obtained by combining the final three images reconstructed from each detector, corresponding to the red, green and blue color channels.
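A minimal sketch of the traditional correlation estimate, O_μ(x, y) ∝ ⟨(S_i − ⟨S⟩) I_i(x, y)⟩, on a small synthetic scene (all sizes, seeds and names here are illustrative assumptions, not parameters of the experiment):

```python
import numpy as np

rng = np.random.default_rng(1)
ny, nx = 8, 16                       # toy reconstruction resolution
scene = np.zeros((ny, nx))
scene[2:6, 4:12] = 1.0               # simple bright rectangle as the object

n_meas = 20000
# Random binary illumination patterns, one per measurement.
patterns = rng.integers(0, 2, size=(n_meas, ny, nx)).astype(float)
# Single-pixel signal: total reflected intensity for each pattern (Eq. 1).
signals = np.tensordot(patterns, scene, axes=([1, 2], [0, 1]))

# Correlation estimate: average of mean-subtracted signals times patterns.
estimate = np.tensordot(signals - signals.mean(), patterns, axes=(0, 0)) / n_meas
# Pixels inside the rectangle come out significantly brighter than outside.
```

With enough measurements the estimate converges to a scaled, offset copy of the scene; the normalization described above reduces the residual noise at a fixed number of measurements.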
3.2. Inversion image reconstruction
For inversion algorithms we can reshape each 2D pattern as a 1D array, I_i, and produce a measurement matrix, I, containing all projected patterns, such that

S^μ = I O^μ,   (4)

where S^μ is the column vector of measured signals and O^μ is the scene reshaped as a column vector of N unknowns. An estimate of the scene is then obtained by inverting this linear system.
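The inversion approach can be sketched in a few lines: stack the flattened patterns as rows of a measurement matrix and solve the resulting linear system in the least-squares sense (sizes and seeds below are toy assumptions; with noiseless data and more independent measurements than unknowns, the recovery is exact):

```python
import numpy as np

rng = np.random.default_rng(2)
ny, nx = 4, 6
n_pix = ny * nx
scene = rng.random(n_pix)                    # flattened toy scene O

n_meas = 2 * n_pix                           # over-determined system
I = rng.integers(0, 2, size=(n_meas, n_pix)).astype(float)  # pattern rows
S = I @ scene                                # noiseless signals, S = I O

# Least-squares inversion (equivalent to the pseudo-inverse here).
O_hat, *_ = np.linalg.lstsq(I, S, rcond=None)
O_image = O_hat.reshape(ny, nx)              # reshape back to 2D
```

In the real system the matrix is much larger and the data noisy, so the problem becomes ill-conditioned; this motivates the compressive sensing approach of the next subsection.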
3.3. Compressive sensing
To reduce the number of measurements required for faithful image reconstruction we can instead employ compressive sensing techniques [17, 18], for which a variety of different approaches exist. In this experiment we make use of the ℓ1-magic toolbox for the Matlab programming language (available at www.l1-magic.org). To employ this technique the system must be represented in an appropriate sparse basis, therefore we perform a 1D discrete cosine transform (DCT) on each reshaped pattern I_i such that I ⇒ I_DCT. The ill-conditioned problem can then be expressed as

min ‖O*_μ‖_1 subject to ‖I_DCT O*_μ − S_μ‖_2 ≤ ε,   (5)

where ε is a user-specified noise tolerance [19] and was assigned a value of 0.01 in this experiment for optimum performance. Performing an inverse DCT on O*_μ, with appropriate reshaping, results in a solution for the scene, O_μ(x, y).
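As a stand-in for the ℓ1-magic solver (and omitting the DCT step by taking the unknown to be sparse directly), the spirit of the ℓ1 recovery can be sketched with ISTA, a simple iterative soft-thresholding method for the closely related ℓ1-regularized least-squares problem. All sizes, seeds and the regularization weight below are toy assumptions, not parameters from the experiment:

```python
import numpy as np

rng = np.random.default_rng(3)
n_pix, n_meas = 64, 40                    # sub-Nyquist: fewer rows than unknowns
x_true = np.zeros(n_pix)
x_true[[3, 17, 42]] = [1.5, -2.0, 1.0]    # 3-sparse unknown (scene in sparse basis)

A = rng.normal(size=(n_meas, n_pix)) / np.sqrt(n_meas)  # measurement matrix
b = A @ x_true                                          # measured signals

def ista(A, b, lam=0.01, n_iter=5000):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 by proximal gradient descent."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - step * (A.T @ (A @ x - b))        # gradient step on the quadratic
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # soft threshold
    return x

x_hat = ista(A, b)   # recovers x_true closely despite n_meas < n_pix
```

The constrained form of Eq. 5 and this penalized form are equivalent for a suitable correspondence between ε and the regularization weight; ℓ1-magic solves the constrained version directly.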
By utilizing these compressed sensing techniques the reconstructed image quality can be maintained, even with a highly ill-conditioned system of equations, by taking advantage of the sparsity of natural images. In Fig. 3 we compare the iterative algorithm and the compressed sensing technique for reconstructing a colored scene with a resolution of 128 × 64 pixels, for an increasing number of iterations. When using compressed sensing techniques a high-quality image can be observed for 6000 iterations (less than 10 seconds of acquisition), which corresponds to approximately 75% of the Nyquist limit. Compared with the iterative algorithm at the equivalent number of iterations, we observe both increased image quality and increased contrast, as expected.
In conclusion, we have shown that computational imaging with three single-pixel detectors can be used to produce full-color images of large scenes in just a few seconds of data acquisition. The use of a digital light projector and a suitable computer algorithm allows rapid structured illumination and hence short acquisition times, showing promise for a range of alternative imaging applications. Additionally, by employing compressive sensing techniques we have shown significant improvement to the image quality when the number of measurements is below the Nyquist limit.
The low cost of additional photodetectors and the large operating bandwidth afforded by DMD technology open up a range of alternative imaging solutions, such as hyperspectral imaging, particularly at wavelengths where CCD or CMOS imaging technology is limited. In addition, the use of single-pixel photodetectors may offer low data requirements for sending images, provided the patterns used to generate them are known by both parties. We have demonstrated a pattern projection rate of approximately 650 Hz; however, in principle DMD technology can display in excess of 20 kHz, which could reduce acquisition times further.
M. J. P. would like to thank the Royal Society, the Wolfson Foundation and DARPA. We gratefully acknowledge financial support from the EPSRC (Grant EP/I012451/1).
References and links
3. R. S. Bennink, S. J. Bentley, and R. W. Boyd, ““Two-photon” coincidence imaging with a classical source,” Phys. Rev. Lett. 89, 113601 (2002). [CrossRef]
5. A. Gatti, E. Brambilla, M. Bache, and L. A. Lugiato, “Correlated imaging, quantum and classical,” Phys. Rev. A 70, 013802 (2004). [CrossRef]
7. F. Ferri, D. Magatti, A. Gatti, M. Bache, E. Brambilla, and L. A. Lugiato, “High-resolution ghost image and ghost diffraction experiments with thermal light,” Phys. Rev. Lett. 94, 183602 (2005).
8. J. H. Shapiro and R. W. Boyd, “The physics of ghost imaging,” Quantum Inf. Process. 11, 949–993 (2012). [CrossRef]
9. J. H. Shapiro, “Computational ghost imaging,” Phys. Rev. A 78, 061802 (2008). [CrossRef]
10. M. Duarte, M. Davenport, D. Takhar, J. Laska, T. Sun, K. Kelly, and R. Baraniuk, “Single-pixel imaging via compressive sampling,” IEEE Signal Proc. Mag. 25, 83–91 (2008). [CrossRef]
11. P. Sen, B. Chen, G. Garg, S. R. Marschner, M. Horowitz, M. Levoy, and H. P. A. Lensch, “Dual photography,” ACM Trans. Graph. 24, 745–755 (2005). [CrossRef]
13. G. K. Wallace, “The JPEG still picture compression standard,” Commun. ACM 34, 30–44 (1991). [CrossRef]
14. D. Preece, R. Bowman, A. Linnenberger, G. Gibson, S. Serati, and M. Padgett, “Increasing trap stiffness with position clamping in holographic optical tweezers,” Opt. Express 17, 22718–22725 (2009). [CrossRef]
16. B. Sun, S. S. Welsh, M. P. Edgar, J. H. Shapiro, and M. J. Padgett, “Normalized ghost imaging,” Opt. Express 20, 16892–16901 (2012). [CrossRef]
17. O. Katz, Y. Bromberg, and Y. Silberberg, “Compressive ghost imaging,” Appl. Phys. Lett. 95, 131110 (2009). [CrossRef]
18. D. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory 52, 1289–1306 (2006). [CrossRef]
19. K. Koh, S.-J. Kim, and S. P. Boyd, “An interior-point method for large-scale l1-regularized logistic regression.” J. Mach. Learn. Res. 8, 1519–1555 (2007).