Abstract

Ghost imaging captures 2D images with a point detector instead of an array sensor. It could therefore solve the challenge of building cameras in wave bands where array sensors are difficult and expensive to produce, opening up more routine THz, near-infrared, lifetime, and hyperspectral imaging with simple single-pixel detectors. Traditionally, ghost imaging retrieves the image of an object offline, by correlating measured light intensities with pre-designed illuminating patterns. Here we present a “self-evolving” ghost imaging (SEGI) strategy that images objects without offline post-processing and can also image objects in turbid media. By inspecting the optical feedback, we evaluate the illumination patterns with a cost function and generate offspring illumination patterns that mimic the object’s image, eliminating the reconstruction step. In the initial evolving stage, the object’s “genetic information” is stored in the patterns. In the subsequent imaging stage, the object’s image (${48} \times {48}\;{\rm pixels}$) can be updated at a 40 Hz imaging rate. We numerically and experimentally demonstrate this concept for static and moving objects. The frame-memory effect between the self-evolving illumination patterns, provided by the genetic algorithm, enables SEGI to image through turbid media; we demonstrate this capability by imaging an object placed in a container filled with water and sand. SEGI shows robust and superior imaging power compared with traditional computational ghost imaging. This strategy could enhance ghost imaging in applications such as remote sensing, imaging through scattering media, and low-irradiative biological imaging.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. INTRODUCTION

Computational ghost imaging (CGI) allows image formation with a single-pixel detector that has no spatial resolution by applying time-varying illumination patterns to an object of interest [1,2]. The predetermined optical patterns are typically generated by a scattering medium or a spatial light modulator and projected onto the object. The single-element photodetector records the light intensity fluctuations of the illumination patterns after they interact with the object. The image can then be recovered by analyzing the correlations between the patterns and the measured intensities. Closely related to CGI is a technique called single-pixel imaging, in which the image of the object is modulated by a spatial light modulator and then detected by a single-pixel detector [3]. Such a single-pixel detection strategy has the potential to enable low-cost imaging in the x-ray [4–6], infrared [7], and terahertz wave bands [8–10], where detector arrays are often expensive and may have low sensitivity. The technique also has the ability to enhance bioimaging deep inside scattering media, since the total fluorescence light detection scheme tolerates signal scrambling through biological tissues better than conventional 2D pixel-array detection [11]. The sparse illumination patterns used can eliminate the “out-of-focus” light seen in optical-sectioning microscopy [12]. Single-pixel detection can also enable hyperspectral and time-resolved imaging [13–15]. Taking advantage of the object image’s inherent sparsity, CGI can recover an image with fewer measurements than required by raster scanning. Thus, it provides benefits such as improved frame rates at lower sampling ratios in applications such as multiphoton microscopy [16], random-access microscopy [17], multimode fiber endoscopy [18], and cytometry [19].

Post-processing algorithms for CGI and single-pixel cameras have been proposed to enhance the reconstructed image. Compressed sensing provides high image quality at the cost of recovery time [3] for applications that permit offline reconstruction. Other algorithms allow for differential ghost imaging (DGI) [20], iterative ghost imaging [21], pseudo-inverse ghost imaging (PGI) [22–24], or Gerchberg–Saxton-like [25] data treatments, serving different functional needs. Various spatiotemporal illumination patterns, for instance, the random, Hadamard, wavelet, Fourier, and discrete cosine bases, have also been proposed either to denoise the image by sampling on its inherent sparsity or to enable computationally fast algorithms [7,26–28]. However, these methods calculate the reconstructed image after at least one sampling cycle or, in most cases, do so offline, limiting applications of ghost imaging (GI) that need instant imaging.

Recently, there have been international efforts to boost the speed of CGI/single-pixel imaging to enable fast imaging. Generally, more measurements in one sampling cycle lead to higher quality reconstruction but lower imaging speed and longer reconstruction times. Benefiting from advances in machine learning, deep-learning-assisted CGI can substantially improve image reconstruction quality at limited sampling ratios [29–32]. Deep neural networks, after training on simulated or experimental data, can improve the retrieved image quality, especially at a low sampling rate for a high frame rate, or in scattering conditions [33], for 2D imaging and 3D depth mapping [34]. However, the offline nature of these reconstruction methods intrinsically limits CGI’s ability to image instantaneously, when the object is moving. Moreover, a dynamic change in the object or its surrounding environment between successive illumination patterns within one sampling cycle will induce artifacts and noise.

Here we present a feedback-based GI called self-evolving ghost imaging (SEGI), which can update the image of an object (${48} \times {48}\;{\rm pixels}$) at 40 Hz, using 20 evolved illuminating patterns. By exploiting the inherent reciprocity between the total light intensity signal from an unknown object and the illumination patterns, we can create evolved patterns that converge towards the image of the object as the sampling loop iterates. We introduce a genetic algorithm, previously used in wavefront optimization to focus light through scattering media [35,36], to adaptively optimize the patterns in every generation of SEGI. Our technique enables instant imaging without the postponed computation required by other GI schemes. Additionally, we adapt the block-matching and 3D filtering (BM3D) image denoising method to numerically denoise the raw images and enhance image quality [37]. Finally, we numerically and experimentally demonstrate SEGI imaging of an object inside a dynamic turbid environment. Compared with five traditional GI methods, SEGI shows more robust imaging power and superior image quality.

2. RESULTS

In typical CGI, the illumination patterns ${I_i}({x,y})$ are designed prior to an experiment taking place and projected by a spatial light modulator, such as a digital micromirror device (DMD). Then the signal light intensities ${S_i}$ can be recorded by a bucket detector such as a photodiode or photomultiplier tube, or by pixel binning in a charge-coupled device, acting as a single-pixel detector, with

$$S_i = \int I_i(x,y)\, O(x,y)\,{\rm d}x\,{\rm d}y,$$
where $O({x,y})$ is the geometric function of the object. A traditional correlation strategy [1] can retrieve the ghost image $R({x,y})$ after a sequence of measurements by
$$R(x,y) = \langle S\,I(x,y)\rangle - \langle S\rangle \langle I(x,y)\rangle,$$
where $\langle \cdots \rangle$ is the ensemble average over the distribution of patterns.
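For concreteness, Eqs. (1) and (2) can be simulated in a few lines of NumPy. This is a minimal sketch: the object and the number of random patterns are illustrative placeholders, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
H = W = 64                          # image size in pixels

# Illustrative binary object O(x, y): a bright square on a dark background.
obj = np.zeros((H, W))
obj[20:44, 20:44] = 1.0

# Predetermined random binary patterns I_i(x, y), as in conventional CGI.
patterns = rng.integers(0, 2, size=(2000, H, W)).astype(float)

# Eq. (1): S_i is the pattern-object overlap integral (a sum on a pixel grid).
signals = (patterns * obj).sum(axis=(1, 2))

# Eq. (2): R(x, y) = <S I(x, y)> - <S><I(x, y)>.
R = (signals[:, None, None] * patterns).mean(axis=0) \
    - signals.mean() * patterns.mean(axis=0)
```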

In SEGI, the illuminating patterns are dynamically updated towards the shape of the target, in contrast to the predetermined patterns of conventional CGI, as shown in Fig. 1. A genetic algorithm, which uses principles inspired by nature to “evolve” toward the best solution, iteratively optimizes the patterns through breeding and mutation operations according to the measured intensities and parent patterns [35,36], as depicted in Fig. 1(a). Initially, a population of $N$ parent patterns $I_i^1(x,y)$, $i = 1, 2, \ldots, N$, is randomly generated for projection onto the object. They are then ranked according to the cost function (CF), normalized by the initial parents, as defined below:

$${\rm CF}(i,g) = \frac{(S_i^g)^k \cdot \left\langle \sum_{p=1}^{Px} I_i^1(x(p),y(p)) \right\rangle}{\langle (S_i^1)^k \rangle \cdot \sum_{p=1}^{Px} I_i^g(x(p),y(p))},$$
where $\langle (S_i^1)^k \rangle$ is the ensemble average of the $k$-th power of the total light intensities measured for the initial patterns, $\sum_{p=1}^{Px} I_i^g(x(p),y(p))$ is the pattern weight determined by summing the pixel values, and $S_i^g$ is the corresponding light intensity in generation $g$ ($g \in [1, \ldots, G]$; $G$ is the final generation number). $k$ is a weight coefficient for $S_i^g$, $p$ is the pixel index of the points on each pattern, and $Px$ is the total pixel number of each pattern [e.g., ${Px} = {64} \times {64}$ for Fig. 1(c)]. Each offspring pattern is created from two parent patterns, $pa$ and $ma$, chosen at random according to the CF (higher CF values mean higher rankings and a correspondingly higher probability of being chosen) and combined through a random binary template $T$: ${\rm Offspring} = ma \cdot T + pa \cdot (1 - T)$. Mimicking natural evolution, the offspring pattern then mutates at a fraction of its pixels, with a mutation rate that decreases with the generation number. $M$ mutated offspring patterns are generated by repeating this process (from parent selection to mutation) $M$ times; here $M = N/2$. These offspring patterns replace the $M$ lowest-ranked of the previous generation's $N$ patterns, creating a new generation of $N$ patterns that self-evolve towards the object image.
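To make the update rules concrete, the following Python sketch implements one generation for binary patterns, reusing the object `obj` defined earlier. It is illustrative only: the bucket measurement is simulated, the rank-weighted parent selection and pixel-flip mutation are our assumptions about details the text leaves open, and the mutation rate is held fixed rather than decreased over generations as in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def evolve_one_generation(patterns, measure, s1_k_mean, w1_mean,
                          k=1, mutation_rate=0.02):
    """One SEGI generation: rank by the CF of Eq. (3), breed M = N/2
    offspring from rank-weighted parent pairs, mutate, and replace the
    M lowest-ranked patterns."""
    N = len(patterns)
    M = N // 2
    S = np.array([measure(p) for p in patterns])       # single-pixel signals
    cf = (S**k * w1_mean) / (s1_k_mean * patterns.sum(axis=(1, 2)))  # Eq. (3)
    ranked = patterns[np.argsort(cf)[::-1]]            # best first (copy)
    prob = np.arange(N, 0, -1, dtype=float)            # rank-weighted selection
    prob /= prob.sum()
    offspring = np.empty((M,) + patterns.shape[1:])
    for m in range(M):
        ma, pa = ranked[rng.choice(N, size=2, replace=False, p=prob)]
        T = rng.random(ma.shape) < 0.5                 # random binary template
        child = np.where(T, ma, pa)                    # Offspring = ma*T + pa*(1-T)
        flip = rng.random(child.shape) < mutation_rate
        offspring[m] = np.where(flip, 1.0 - child, child)  # pixel-flip mutation
    ranked[N - M:] = offspring                         # replace the worst half
    return ranked, cf.max()

# Simulated bucket detector; the normalization constants are computed
# once from the initial generation, as in Eq. (3).
measure = lambda p: (p * obj).sum()
patterns = rng.integers(0, 2, size=(30, 64, 64)).astype(float)
s1_k_mean = np.mean([measure(p) for p in patterns])    # k = 1 here
w1_mean = patterns.sum(axis=(1, 2)).mean()
for g in range(1000):
    patterns, best_cf = evolve_one_generation(patterns, measure,
                                              s1_k_mean, w1_mean)
```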

Fig. 1. Working principle of SEGI with a genetic algorithm. (a) Schematic diagram of SEGI. A population of $N$ random parent patterns is generated to illuminate the object. After the cost function measurement, these patterns are ranked for breeding $N/{2}$ offspring patterns. Each new pattern is created by combining the ma and pa patterns with a random breeding template, followed by a mutation operation on the pixel values. The mutated offspring patterns replace the $M$ ($= N/2$) lowest-ranked parent-generation patterns. The steps are repeated in every generation. Finally, the illumination patterns evolve towards an image of the object (bottom left) as the cost function values continue to increase (bottom right, derived from the evolving patterns in SEGI). (b) Schematic of traditional computational ghost imaging with predetermined patterns and measured single-pixel signals (fluctuating intensity distribution of ghost imaging with random patterns). (c) Example of SEGI results for increasing generation numbers with a population of 30. $G$ is the generation number.


Figure 1(c) illustrates SEGI with the letters “GI” as the object in a ${64} \times {64}$-pixel image. With $N = 30$, a rough outline appears by the 100th generation and continuously improves with $G$. In contrast to the fluctuating intensity distribution [Fig. 1(b)] seen in conventional CGI, the CF enhancement increases steadily during SEGI optimization and is quadrupled by the 10,000th generation, yielding a high-quality image [Fig. 1(a)]. Note that a useful generation number is not necessarily as large as 10,000, especially for dynamic imaging, where the structural continuity between frames can serve as a priori knowledge to assist pattern updates.

Figure 2 overviews our simulations of SEGI for static and dynamic binary objects. Figure 2(b) shows the raw and filtered SEGI images of a static binary object [Fig. 2(a)] with $N = {30}$ and $G = {1000}$. By applying a median filter with a $3 \times 3$ kernel to the retrieved image, we can eliminate the salt-and-pepper noise, preserve the object’s edges, and improve its peak signal-to-noise ratio (PSNR) from 6.95 dB to 14.08 dB. To explore the contributions of $N$ and $G$ in SEGI, we plot the PSNR of the raw SEGI images for increasing $N$ and $G$ in Fig. 2(c). The map shows higher values in its upper-right part, indicating that, for the same total pattern count $N \times G$, $G$ has a larger impact than $N$ on the evolution process.
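Continuing the sketches above, the filtering and evaluation step can be reproduced with standard library calls. Treating the mean of the final-generation patterns as the raw SEGI image is our illustrative choice, since the evolved patterns themselves carry the image.

```python
from scipy.ndimage import median_filter
from skimage.metrics import peak_signal_noise_ratio

raw = patterns.mean(axis=0)            # evolved patterns -> raw SEGI image
filtered = median_filter(raw, size=3)  # 3x3 kernel removes salt-and-pepper noise
print(peak_signal_noise_ratio(obj, raw, data_range=1.0))
print(peak_signal_noise_ratio(obj, filtered, data_range=1.0))
```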


Fig. 2. Numerical results of SEGI for a binary object with $k = 1$. (a) Original object. (b) SEGI raw (top) and median-filtered (bottom) results after $G = {1000}$ with a population of 30. (c) PSNR of SEGI raw results with (top) population numbers ranging from 10 to 50 and (bottom) generation numbers ranging from 10 to 10,000. (d) Example object frames (52nd and 112th) in a 112-frame image series and (e) corresponding SEGI raw and median-filtered results with $\Delta G = 100$ and $N = {30}$. (f) PSNR and CF enhancement values of the dynamic SEGI. A video of the moving object is shown in Visualization 1.


Figure 2(e) and Visualization 1 show the SEGI images of an object that changes dynamically through translation and rotation [Fig. 2(d)]. The movement has been split into 112 individual frames to be imaged. For the simulation of dynamic objects, we use $\Delta G$ to denote the number of generations used for each frame of the dynamic object image series. Strikingly, with a limited sampling number of $\Delta G = 100$ and $N = 30$, SEGI can still create a recognizable image of the translated object in frame 52; the image quality (PSNR of 6.37 dB for the unfiltered image) is comparable with that for imaging a static object under $G = 1000$ and $N = 30$. This enhancement comes from the continuity of object structures along the time dimension, as the output image from the previous frame serves as a priori knowledge to guide image evolution in the next frame. As a result, the image quality increases with the tracking time/frame, and the required sampling ratio is greatly reduced. The PSNR reaches as high as 8.24 dB (raw) at frame 112 with a sampling ratio of ${\sim}{36.6}\%$ (see Supplement 1, Note 2 for details).

Figure 2(f) shows the PSNR enhancement during the entire movement. Under small translations (frames 1–40), each frame inherits image information from the previous one, which allows SEGI to enhance the PSNR from 3.16 dB to about 7 dB. The CF value, which correlates with image quality, shows the same trend as the raw PSNR, rising from one to three. When the displacement is larger (frames 41–68), the newly generated images for each frame inherit less information from previous frames. In this case, the PSNR shows a slightly decreasing trend, even though it increases during the 100 generations of each frame [inset, Fig. 2(f)]. With a small rotational displacement (frames 69–112), the PSNR increases with the same trend as in small translations. Note that the filtering process can substantially enhance the PSNR for the rotating object.

Our imaging process can be organized into an “initial evolving state” and an “imaging state,” as shown in Fig. S18 of Supplement 1. The initial state accumulates object information; the subsequent imaging state then operates much like normal imaging, but without post-reconstruction. The initial evolving state usually takes more generations ($G$) than the imaging state to produce an initial image, e.g., $G = {1000}$ for ${64} \times {64}\;{\rm pixels}$ [Figs. 2(b) and 2(f)], with more generations providing better image quality. The subsequent imaging state rapidly updates all changes in the image at each generation, maintaining the image quality. Larger displacements of the object require more updating generations to image, e.g., $\Delta G = {100}$ is required if the object [Fig. 2(f)] moves at 1 pixel/frame. The initial evolving state needs to be conducted only once for each object. The imaging sampling rate depends on $\Delta G$.

We further investigate the role of the weight coefficient $k$ in image quality. Figure 3(a) shows the SEGI images of a static object with $k = {2}$, 3, and 4, where increasing $k$ results in a faster grain-filling rate than in Fig. 2(b) with the same $N$ and $G$, while the number of noisy pixels also increases significantly. This phenomenon is more obvious at higher generations (see Supplement 1, Fig. S1). The reason is that an increased $k$ increasingly highlights the contribution of “1” pixels located inside the object. The SEGI images of a dynamic object [Fig. 3(b)] show the same phenomenon, and the filtered images [Fig. 3(c)] indicate that the image dilation tendency decreases image quality. Figure 3(d) compares the PSNR for $k = {1}$, 2, 3, and 4 for both static and dynamic objects (see Supplement 1, Fig. S2 for the filtered results). For static imaging with fewer than 3800 generations, the $k = 1$ CF yields better image quality because it grows faster than the other three, and its PSNR is generally higher than that for $k = 2$, 3, 4. Beyond 3800 generations, $k = 2$ becomes the best option for the static object. For dynamic imaging, the advantage of $k = 1$ at small generation numbers gives much higher image quality.


Fig. 3. Characteristics of different $k$ values in the designed cost function. (a) SEGI raw results for static imaging after 1000 generations with $k = 2, 3, 4$. (b) SEGI raw results of the 112th frame for dynamic imaging with different $k$ values ($\Delta G = 100$) and (c) corresponding median-filtered images. (d) PSNR of SEGI results for static and dynamic imaging during the evolution processes. The population number $N$ is 30 in (a)–(d). The movement of the object is the same as that in Fig. 2.


We further employ SEGI to capture ${64} \times {64}$-pixel, 8-bit grayscale images, shown in Fig. 4. To balance the numerator and denominator so that the detected light intensities and pattern weights contribute comparably, we modify the CF for grayscale imaging as

$${\rm CF}(i,g) = \frac{(S_i^g)^2 \cdot \left\langle \sum_{p=1}^{Px} \left(I_i^1(x(p),y(p))\right)^2 \right\rangle}{\langle (S_i^1)^2 \rangle \cdot \sum_{p=1}^{Px} \left(I_i^g(x(p),y(p))\right)^2}.$$

Here we choose $k = 2$, as other $k$ values overweight the contributions of either large or small pixel values, leading to binary-like images. Figure 4(a) shows our grayscale object, made of three blocks with different shapes above a checkered background; the color indicates the grayscale value. We use the BM3D algorithm for noise reduction [37] and the structural similarity index measure (SSIM, higher is better) to quantitatively analyze the SEGI images. SSIM captures changes in structural information between images, which aligns better with human perception than changes in absolute pixel values. Two filtered SEGI results, with $G = 200$, $N = 40$ and with $G = 300$, $N = 100$, are shown in Fig. 4(a) middle and right, with sampling ratios of 99% and 369%, respectively. The noise standard deviation of the BM3D filter is set to 60. A detailed study of the SEGI raw and filtered images is given in Supplement 1, Section 2, Figs. S4 and S5.
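In code, the grayscale post-processing amounts to two library calls. The sketch below assumes the PyPI `bm3d` package (whose `bm3d(image, sigma_psd)` interface we take as given) and scikit-image's SSIM; `raw` and `obj` stand for the 8-bit raw SEGI image and the ground-truth object.

```python
import bm3d                                      # pip install bm3d (API assumed)
from skimage.metrics import structural_similarity

# sigma_psd = 60 matches the noise standard deviation quoted above.
denoised = bm3d.bm3d(raw.astype(float), sigma_psd=60)
score = structural_similarity(obj.astype(float), denoised, data_range=255)
```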


Fig. 4. Numerical results of SEGI for a grayscale object. (a) Original object image with coded pseudocolor (left) and filtered SEGI results with $G = 200$, $N = 40$ (middle) and $G = 300$, $N = 100$ (right). (b), (c) 162nd and 256th frames selected from a 256-frame object image series (left) and corresponding filtered SEGI results with $\Delta G = 100$, $N = 30$ (middle) and $\Delta G = 200$, $N = 100$ (right). (d) SSIM values of raw and filtered SEGI results for dynamic imaging (see Visualization 2 and Visualization 3 for the first two cases).


For dynamic imaging, we extend the object image to a 256-frame image series by translating and rotating the elements and the background (see Supplement 1, Fig. S6, Visualization 2, and Visualization 3 for details). The 162nd and 256th original images are shown on the left of Figs. 4(b) and 4(c). The middle images are generated under $\Delta G = 100$ with $N = 30$; the right images under $\Delta G = 200$ with $N = 100$. The sampling ratios are thus 36.6% and 244%, respectively. The rough structures of the object image are retrieved in the middle SEGI images with a sampling ratio of 36.6%, whereas the static case needs a sampling ratio of about 369%. The SSIM values of dynamic SEGI imaging under $\Delta G = 100$, $N = 30$; $\Delta G = 100$, $N = 50$; $\Delta G = 100$, $N = 100$; and $\Delta G = 200$, $N = 100$ are shown in Fig. 4(d). The image quality could be further improved by increasing $\Delta G$ or $N$, but at the cost of a longer sampling time. More details are provided in Supplement 1, Figs. S7–S12. Visualization 4 further shows the ability of SEGI with unidirectionally moving objects.

A comparison between SEGI and traditional GI methods—GI, DGI, PGI, compressive sensing with a sparse representation prior (CS-sparse), and total variation regularization (CS-TV) [38]—is shown in Fig. 5. Here, we use ${\rm SEGI}_{\rm ini}$ to denote the first frame of the “initial evolving” state (${\rm frame} = {1}$, $\Delta G = {100}$), and SEGI to denote the image in the “imaging state” (${\rm frame} = {183}$, $\Delta G = {100}$). For a ${64} \times {64}\;{\rm pixel}$ object image with $N = {30}$ and $\Delta G = {100}$, 25–50 frames are required for the initial evolving state. The turning point on the imaging quality curve [such as Fig. 4(d)] marks the start of the imaging state. Note that the starting frame of the imaging state varies with pixel number, population, and $\Delta G$; one could redefine it according to the required imaging quality, as the images are continuously observed and updated. Here the object remains static during each imaging frame. Traditional GI methods show better image quality than ${\rm SEGI}_{\rm ini}$, as ${\rm SEGI}_{\rm ini}$ has yet to benefit from the evolving process. The “imaging state” starts from the 50th frame, as shown in Visualization 2 and Visualization 3. SEGI shows image quality comparable to the CS methods. For all methods, a higher sampling ratio [Fig. 5(b)] produces better image quality than a lower one [Fig. 5(a)]. Compared with traditional GI methods, the disadvantage of SEGI is the necessary initial evolving time [Fig. 4(d)], which is similar to the training time for machine learning. However, this initial evolving time does not affect the imaging rate (Supplement 1, Fig. S18). The genetic information inherited from the initial evolving state provides advantages in imaging objects through dynamic obstructions, as the rapidly changing scattering noise cannot be inherited, as investigated below.
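As a reference point for the comparison, DGI differs from Eq. (2) only by normalizing each measurement against the total pattern intensity [20]. The following is a minimal sketch of our understanding of that reconstruction, not the exact implementation used in the comparison:

```python
import numpy as np

def dgi(patterns, signals):
    """Differential ghost imaging: R = <S I> - (<S>/<B>) <B I>,
    where B_i is the total intensity of pattern i [20]."""
    B = patterns.sum(axis=(1, 2))
    SI = (signals[:, None, None] * patterns).mean(axis=0)
    BI = (B[:, None, None] * patterns).mean(axis=0)
    return SI - (signals.mean() / B.mean()) * BI
```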


Fig. 5. Numerical comparison of imaging results for grayscale objects with sampling ratios of (a) 36.6% with $N = {30}$ and (b) 61% with $N = {50}$. The imaging methods include GI, DGI, PGI, CS-sparse, CS-TV, and SEGI. ${\rm SEGI}_{\rm ini}$ is the first imaging frame in the “initial evolving” state, and SEGI is the 183rd image in the “imaging state.”


We further investigate the imaging ability of SEGI for an object subject to dynamic obstructions. Figure 6 depicts the numerical simulation of imaging with dynamic obstructions by SEGI and traditional GI methods. The object is the Cameraman image (${64} \times {64}\;{\rm pixels}$) shown in Fig. 6(a), with intensities ranging from 0 to 150. This object is disturbed by randomly distributed particles that are ${4} \times {4}\;{\rm pixels}$ in size, have an intensity range of ${150}\sim{255}$, and update at 122 Hz. Additional noise (about 0.1%) is added to mimic real experimental conditions. Figure 6(b) shows two typical disturbed objects at different times.
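The disturbance model is easy to reproduce; in the sketch below, the particle count per frame is an assumption, since the paper specifies only the particle size, intensity range, update rate, and noise level.

```python
import numpy as np

rng = np.random.default_rng(2)

def disturbed_frame(obj, n_particles=30, size=4, lo=150, hi=255):
    """Overlay randomly placed size x size particles with intensities in
    [lo, hi] on the object; call once per 1/122 s dynamic time unit."""
    frame = obj.copy().astype(float)
    H, W = obj.shape
    for _ in range(n_particles):
        y = rng.integers(0, H - size)
        x = rng.integers(0, W - size)
        frame[y:y + size, x:x + size] = rng.integers(lo, hi + 1)
    # ~0.1% of full scale, our reading of the "about 0.1%" detection noise.
    return frame + rng.normal(0, 0.001 * 255, obj.shape)
```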


Fig. 6. Comparison of SEGI and CGI under dynamic particle disturbance. (a) The Cameraman image used as the original object. (b) Example frames with dynamic particle disturbance. During the measurement time of both methods, the target is scattered by randomly distributed particles updating at 122 Hz, with ${\sim}{0.1}\%$ additional detection noise. The illumination patterns have a frequency of 10 kHz. The SSIM indices, relative to the original target, of SEGI-r (raw) and SEGI-f (BM3D filtered) are shown on the left of (c) and (d). The first, 6000th, and 1000th SEGI images are shown on the right. Here $M = {20}$ and $N = {40}$ in SEGI. (e)–(g) Results acquired with the DGI, PGI, and CS-TV methods. “Mode 1” (orange line) recovers the image from the measurements in each dynamic time unit. The images recovered from the first and 6000th frames (with a sampling ratio of 2%) are shown in the middle. “Mode 2” (green line) combines all former and current measurements and corresponding illumination patterns to recover the image. The 1000th image (with a sampling ratio of 2000%) and the corresponding computation time are shown on the right. The dynamic time unit is the period during which the disturbing particles remain static.


We first compare DGI, PGI, and CS-TV with SEGI in “operation mode 1” [orange lines in Figs. 6(e)–6(g)], where the images are acquired within each dynamic time unit (the period during which the particles remain steady). The sampling ratio for mode 1 is a constant value whose acquisition time (pattern number $\times$ single-pattern illumination time) equals the dynamic time unit. The mode 1 sampling ratio is 2% (82 patterns) to match the 122 Hz noise update rate. Traditional GI methods have low SSIM values [Figs. 6(e)–6(g)] over the entire measuring period (frames 1–6000), with averages around 0.06, 0.05, and 0.23 for DGI, PGI, and CS-TV, respectively. The 1st and 6000th frames calculated using DGI, PGI, and CS-TV are shown as DGI I and DGI II, PGI I and PGI II, and CS-TV I and CS-TV II; none of them shows a recognizable image, due to the low sampling ratios and image disturbances.

In contrast, SEGI shows increasing image quality [Figs. 6(c) and 6(d)] with increasing frame number. The SSIM value starts low (about zero) and gradually increases to about 0.68 (6000th frame of SEGI-f). The 6000th frame [denoted “SEGI-r II” and “SEGI-f II” in Figs. 6(c) and 6(d)] shows a much clearer image than the first frame (denoted “SEGI-r I” and “SEGI-f I”) and than the standard GI methods. The enhanced image quality benefits from the genetic frame-memory effect, whereby the updated illumination patterns in SEGI inherit most of the object image information from their parental generation. This, in turn, enables the image evolving process to mitigate the disturbance introduced by the moving particles. Here SEGI evolves one generation with 20 new patterns ($M = {20}$, $N = {40}$, $\Delta G = {1}$) within one dynamic time unit.

To produce a more equitable comparison between SEGI and standard GI methods, we demonstrate “mode 2” [green lines in Figs. 6(e)–6(g)], which has a much higher sampling ratio. In this mode, the sampling ratio increases with the frame number; for instance, it reaches 100% by the 50th frame, as 2% of the samples are measured within each frame. For sampling ratios between 2% and 100%, increasing the sampling ratio does not improve image quality for PGI or CS-TV (as shown by the decreased SSIM), because the dynamic particles prevent it; DGI, however, does show an increased SSIM with an increased sampling ratio. When the sampling ratio exceeds 100%, image quality starts to improve [increasing SSIM, Figs. 6(e)–6(g)], because the repeated measurements allow averaging to reduce noise. As a result, DGI, PGI, and CS-TV can generate a recognizable image at the 1000th frame, with a sampling ratio of 2000%, as shown in DGI III, PGI III, and CS-TV III, respectively. SEGI-f III, the 1000th frame imaged by SEGI, still shows better image quality (SSIM 0.55) than the standard GI methods. Note that at a sampling ratio of 2000%, the image recovery times for the standard GI methods are extremely long: 2.14 s, 114.54 s, and 27.98 s for DGI, PGI, and CS-TV, respectively. SEGI, on the other hand, requires no recovery time. Additional imaging comparisons, including GI and CS-sparse, are shown in Supplement 1, Fig. S13.

To experimentally verify the SEGI concept, we use a DMD (ViALUX GmbH V-7001) for pattern projection, collecting light from the object on a silicon photodetector (Supplement 1, Fig. S14). Custom Python-based algorithms are used to generate and update the DMD patterns. To speed up data transfer to the DMD, we use a ${48} \times {48}\;{\rm pixel}$ area on the DMD for SEGI, with an illumination time of 94 µs per pattern and 25 ms per generation (40 Hz); the total illumination time per generation is thus 1.88 ms. The remaining ${\sim}{23}\;{\rm ms}$ of each generation is consumed by pattern loading and genetic-algorithm computation (see Supplement 1, Fig. S19 for details). Figure 7(a) shows SEGI imaging results for a static 3D-printed “smiling face” mask (${\sim}{8}\;{\rm mm}$ in diameter). With a pattern number $N = {20}$, the object outline appears as the generation number approaches 500 (see Supplement 1, Figs. S15–S17 for more details). Figure 7(b) validates SEGI on a moving object (1951 USAF target) with a translation speed of 50 µm/s [${\sim}{0.5}\%$ of the field of view (FOV) per second]. Four selected frames of dynamic SEGI indicate the object’s position changing from “1” to “4.”


Fig. 7. Experimental results of SEGI and comparison of SEGI and CGI in turbid media. (a) SEGI for static objects (${\sim}{10}\;{\rm mm}$ masks) with $N = 20$, $k = 2$, and $G = 100, 200, 500, 2000$ from left to right; the results are shown with a Gaussian blur filter. (b) For dynamic SEGI, a 1951 USAF test target (R3L3S1N, Thorlabs) mounted on a translation stage (DDSM100, Thorlabs) is moved unidirectionally across the FOV. Four selected frames of dynamic SEGI results are shown on the right, corresponding to the positions marked with matching colors. (c) Schematic diagram of scattering imaging of the object “T” through turbid sand. The object is placed in the middle of a 10 cm long plastic tank (Supplement 1, Fig. S21). A magnetic stirrer controls the turbidity level from “0” to “5” with increasing rotation speed. The measured mean free paths for turbid levels 1 to 5 are 592.3, 265.1, 109.5, 86.3, and 57.7 mm, respectively, as listed in Table 1 of Supplement 1. The example pictures show camera images of the object without and with turbid disturbance. (d) Measured single-pixel intensities in traditional CGI with the same illumination patterns at different turbid levels. (e) Top: SSIM of SEGI and traditional GI at different turbid levels. Bottom: cost function enhancements in SEGI at different turbid levels. (f) Imaging results of SEGI, DGI, PGI, and CS-TwIST at different turbid levels. All images are shown with Gaussian filters.


Figures 7(c)–7(f) further examine the imaging power of SEGI through a dynamic turbid layer. Figure 7(c)i shows the experimental arrangement, in which sand and water are placed in a small transparent tank. The sand grains are ${\sim}{0.1 {-} 2}\;{\rm mm}$ in size (Supplement 1, Fig. S20). The object (a ${\sim}{10}\;{\rm mm}$ “T” mask) is placed inside the tank, and the surrounding fluid becomes turbid when the stir bar inside the tank starts rotating. The turbidity level (ranging from zero to five) is controlled by the rotational speed of the bar. Figures 7(c)ii and 7(c)iii are photos of the tank without and with disturbance, respectively (photos of the tank at different turbid levels are shown in Supplement 1, Fig. S21). A higher turbid level (more randomly scattering particles moving at higher speeds) introduces larger random deviations between the true and measured intensities, decreasing the reconstructed image quality. Figure 7(d) shows the measured single-pixel intensities with the same illumination patterns (used in CGI) at different turbid levels, where the turbid-layer-induced intensity fluctuation is as much as three times the signal amplitude.

Figure 7(f) compares SEGI images with those reconstructed by traditional GI methods including DGI, PGI, and CS-TwIST. Here CS-TwIST denotes a two-step iterative shrinkage/thresholding algorithm [39] with total variation regularization in the compressed sensing framework, providing state-of-the-art denoising. We use a 100% sampling ratio for DGI and a 50% sampling ratio for PGI and CS-TwIST, as a higher sampling ratio decreases the image quality for PGI and CS-TwIST. As shown in Supplement 1, Figs. S22 and S23, PGI, CS-sparse, CS-TV, and CS-TwIST image better at a 50% sampling ratio than at 100%, due to the accumulation of noise: higher sampling ratios trade more signal for more noise. SEGI shows the best image quality. All methods produce recognizable images at lower turbid levels (${0}\sim{2}$), and SEGI and CS-TwIST recover high-quality images. At turbid levels ${3}\sim{5}$, the image quality of the traditional GI methods decreases; SEGI can still resolve the basic shape of the object, whereas CS-TwIST shows considerable distortion. This inferior image quality for traditional GI methods stems from the scatter-induced intensity fluctuations and the distorted illumination patterns. Once the measuring time of a traditional GI method exceeds one dynamic time unit, a longer measuring time brings more fluctuation that disturbs the image. Keeping the measuring time within the dynamic time unit could help traditional GI methods image the object obscured by moving sand; however, due to the high speed of the scattering particles (sand grains), the sampling ratio available to traditional GI within one dynamic time unit (e.g., 0.87% for 2 ms) is too low to recover the object image. In contrast, SEGI can reduce the effect of the distortion of the illumination patterns and the object thanks to the genetic frame-memory effect. Here we use a 10 kHz DMD illumination frequency for ${48} \times {48}\;{\rm pixel}$ patterns. The illumination time of each generation in SEGI is 1.88 ms, which enables the generation of effective offspring patterns as long as the disturbing particles remain static for longer than 1.88 ms. Additional comparisons of imaging results, including CS-sparse and the original GI method, are shown in Supplement 1, Figs. S22 and S23.

Figure 7(e)i shows the quantitative analysis of image quality (SSIM index) for SEGI, DGI, PGI, and CS-TwIST. SEGI shows a strong ability to resist higher turbid levels, especially turbid level 4. The CF values for SEGI at turbid levels ${0}\sim{5}$ [Fig. 7(e)ii] indicate its self-optimizing ability toward high image quality in different environments. At turbid levels ${0}\sim{4}$, SEGI continuously improves by itself over increasing generations. At turbid level 5, the self-improvement pauses for about 40,000 generations due to the increased turbidity, but resumes once the fluctuations become tolerable (from about the 40,000th generation). This result shows that SEGI is robust to environmental conditions.

3. DISCUSSION

In conclusion, we present a new class of GI, SEGI, that allows the image of an object to form directly in the displayed patterns of a spatial light modulator, without post-reconstruction. A genetic algorithm is used to generate the “self-evolving patterns.” Genetic optimization methods such as microgenetic algorithms [40] may further improve the pattern generation efficiency of SEGI, reducing the required generation number. Other advanced feedback algorithms, including simulated annealing [41] and reinforced hybrid algorithms [42], can be incorporated directly into the strategy by replacing the genetic algorithm. A machine-learning-based approach could be implemented in the pattern optimization process by using trained neural networks [43]. Note that this work uses ${48} \times {48}\;{\rm pixels}$ in the experiments for shorter DMD data loading and a faster imaging rate; images with more pixels could be acquired at the cost of longer imaging times. Optimizing the control program and hardware could improve the imaging speed and image size: for instance, using a field-programmable gate array (FPGA) to transfer data to the DMD could reduce the data-loading time by a factor of about 900 [44]. Moreover, 3D imaging could be explored by combining SEGI with digital holography [45] or time-of-flight [46] techniques. The SEGI strategy can be directly adapted to other GI modalities, e.g., phase imaging [47] or optical encryption [48]. Due to the frame-memory effect within the self-evolving illumination patterns, SEGI could further improve single-pixel/ghost imaging through turbid media, especially under highly dynamic turbid conditions. The ability to shape illumination without reconstruction could be used to create adaptive illumination patterns, locally adjusting the illumination intensities according to the sample, for low-irradiative biomedical imaging that reduces photobleaching or phototoxicity in fluorescence microscopy [49]. Moreover, SEGI can physically project the object image by beam-splitting the outgoing light from the illuminating spatial light modulator, which could be used for optical visualization of single-pixel imaging, especially at invisible wavelength bands. For example, splitting the light beam to illuminate a transparent film embedded with upconversion materials doped with lanthanide ions [50] would allow direct visualization of near-infrared images [51]. We therefore anticipate that this work will open opportunities for developing integrated single-pixel cameras and novel GI-based modalities.

Funding

Australian Research Council (DE200100074, DP190101058); China Scholarship Council (201607950009, 201706020170); University of Technology Sydney.

Acknowledgment

We thank Prof. Fengli Gao from Jilin University for the helpful discussion about PGI.

Disclosures

The authors declare no conflicts of interest.

Data availability

All relevant data are available from the corresponding authors upon request.

Supplemental document

See Supplement 1 for supporting content.

REFERENCES

1. Y. Bromberg, O. Katz, and Y. Silberberg, “Ghost imaging with a single detector,” Phys. Rev. A 79, 053840 (2009).

2. J. H. Shapiro, “Computational ghost imaging,” Phys. Rev. A 78, 061802 (2008).

3. M. P. Edgar, G. M. Gibson, and M. J. Padgett, “Principles and prospects for single-pixel imaging,” Nat. Photonics 13, 13–20 (2019).

4. H. Yu, R. Lu, S. Han, H. Xie, G. Du, T. Xiao, and D. Zhu, “Fourier-transform ghost imaging with hard x rays,” Phys. Rev. Lett. 117, 113901 (2016).

5. D. Pelliccia, A. Rack, M. Scheel, V. Cantelli, and D. M. Paganin, “Experimental x-ray ghost imaging,” Phys. Rev. Lett. 117, 113902 (2016).

6. A. Zhang, Y. He, L. Wu, L. Chen, and B. Wang, “Tabletop x-ray ghost imaging with ultra-low radiation,” Optica 5, 374–377 (2018).

7. N. Radwell, K. J. Mitchell, G. M. Gibson, M. P. Edgar, R. Bowman, and M. J. Padgett, “Single-pixel infrared and visible microscope,” Optica 1, 285–289 (2014).

8. R. I. Stantchev, D. B. Phillips, P. Hobson, S. M. Hornett, M. J. Padgett, and E. Hendry, “Compressed sensing with near-field THz radiation,” Optica 4, 989–992 (2017).

9. J. Zhao, E. Yiwen, K. Williams, X. C. Zhang, and R. W. Boyd, “Spatial sampling of terahertz fields with sub-wavelength accuracy via probe-beam encoding,” Light Sci. Appl. 8, 55 (2019).

10. S. C. Chen, Z. Feng, J. Li, W. Tan, L. H. Du, J. Cai, Y. Ma, K. He, H. Ding, Z. H. Zhai, Z. R. Li, C. W. Qiu, X. C. Zhang, and L. G. Zhu, “Ghost spintronic THz-emitter-array microscope,” Light Sci. Appl. 9, 99 (2020).

11. A. Escobet-Montalban, R. Spesyvtsev, M. Chen, W. A. Saber, M. Andrews, C. S. Herrington, M. Mazilu, and K. Dholakia, “Wide-field multiphoton imaging through scattering media without correction,” Sci. Adv. 4, eaau1338 (2018).

12. Y. Wu, P. Ye, I. O. Mirza, G. R. Arce, and D. W. Prather, “Experimental demonstration of an optical-sectioning compressive sensing microscope (CSM),” Opt. Express 18, 24565–24578 (2010).

13. V. Studer, J. Bobin, M. Chahid, H. S. Mousavi, E. Candes, and M. Dahan, “Compressive fluorescence microscopy for biological and hyperspectral imaging,” Proc. Natl. Acad. Sci. USA 109, E1679–E1687 (2012).

14. M. J. Sun, M. P. Edgar, G. M. Gibson, B. Sun, N. Radwell, R. Lamb, and M. J. Padgett, “Single-pixel three-dimensional imaging with time-based depth resolution,” Nat. Commun. 7, 12010 (2016).

15. L. Olivieri, J. S. T. Gongora, L. Peters, V. Cecconi, A. Cutrona, J. Tunesi, R. Tucker, A. Pasquazi, and M. Peccianti, “Hyperspectral terahertz microscopy via nonlinear ghost imaging,” Optica 7, 186–191 (2020).

16. M. Alemohammad, J. Shin, D. N. Tran, J. R. Stroud, S. P. Chin, T. D. Tran, and M. A. Foster, “Widefield compressive multiphoton microscopy,” Opt. Lett. 43, 2989–2992 (2018).

17. C. Wen, M. Ren, F. Feng, W. Chen, and S.-C. Chen, “Compressive sensing for fast 3-D and random-access two-photon microscopy,” Opt. Lett. 44, 4343–4346 (2019).

18. L. V. Amitonova and J. F. de Boer, “Compressive imaging through a multimode fiber,” Opt. Lett. 43, 5427–5430 (2018).

19. S. Ota, R. Horisaki, Y. Kawamura, M. Ugawa, I. Sato, K. Hashimoto, R. Kamesawa, K. Setoyama, S. Yamaguchi, K. Fujiu, K. Waki, and H. Noji, “Ghost cytometry,” Science 360, 1246–1251 (2018).

20. F. Ferri, D. Magatti, L. A. Lugiato, and A. Gatti, “Differential ghost imaging,” Phys. Rev. Lett. 104, 253603 (2010).

21. X.-R. Yao, W.-K. Yu, X.-F. Liu, L.-Z. Li, M.-F. Li, L.-A. Wu, and G.-J. Zhai, “Iterative denoising of ghost imaging,” Opt. Express 22, 24268 (2014).

22. C. Zhang, S. Guo, J. Cao, J. Guan, and F. Gao, “Object reconstitution using pseudo-inverse for ghost imaging,” Opt. Express 22, 30063–30073 (2014).

23. W. Gong, “High-resolution pseudo-inverse ghost imaging,” Photon. Res. 3, 234–237 (2015).

24. X. Lv, S. Guo, C. Wang, C. Yang, H. Zhang, J. Song, W. Gong, and F. Gao, “Experimental investigation of iterative pseudo inverse ghost imaging,” IEEE Photon. J. 10, 3900708 (2018).

25. W. Wang, X. Hu, J. Liu, S. Zhang, J. Suo, and G. Situ, “Gerchberg-Saxton-like ghost imaging,” Opt. Express 23, 28416–28422 (2015).

26. M. Amann and M. Bayer, “Compressive adaptive computational ghost imaging,” Sci. Rep. 3, 1545 (2013).

27. Z. Zhang, X. Ma, and J. Zhong, “Single-pixel imaging by means of Fourier spectrum acquisition,” Nat. Commun. 6, 6225 (2015).

28. B.-L. Liu, Z.-H. Yang, X. Liu, and L.-A. Wu, “Coloured computational imaging with single-pixel detectors based on a 2D discrete cosine transform,” J. Mod. Opt. 64, 259–264 (2017).

29. M. Lyu, W. Wang, H. Wang, H. Wang, G. Li, N. Chen, and G. Situ, “Deep-learning-based ghost imaging,” Sci. Rep. 7, 17865 (2017).

30. C. F. Higham, R. Murray-Smith, M. J. Padgett, and M. P. Edgar, “Deep learning for real-time single-pixel video,” Sci. Rep. 8, 2369 (2018).

31. Y. He, G. Wang, G. Dong, S. Zhu, H. Chen, A. Zhang, and Z. Xu, “Ghost imaging based on deep learning,” Sci. Rep. 8, 6469 (2018).

32. S. Rizvi, J. Cao, K. Zhang, and Q. Hao, “DeepGhost: real-time computational ghost imaging via deep learning,” Sci. Rep. 10, 11400 (2020).

33. F. Wang, H. Wang, H. Wang, G. Li, and G. Situ, “Learning from simulation: an end-to-end deep-learning approach for computational ghost imaging,” Opt. Express 27, 25560–25572 (2019).

34. N. Radwell, S. D. Johnson, M. P. Edgar, C. F. Higham, R. Murray-Smith, and M. J. Padgett, “Deep learning optimised single-pixel LiDAR,” Appl. Phys. Lett. 115, 231101 (2019).

35. D. B. Conkey, A. N. Brown, A. M. Caravaca-Aguirre, and R. Piestun, “Genetic algorithm optimisation for focusing through turbid media in noisy environments,” Opt. Express 20, 4840–4849 (2012).

36. X. Zhang and P. Kner, “Binary wavefront optimisation using a genetic algorithm,” J. Opt. 16, 125704 (2014).

37. K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising by sparse 3D transform-domain collaborative filtering,” IEEE Trans. Image Process. 16, 2080–2095 (2007).

38. L. Bian, J. Suo, Q. Dai, and F. Chen, “Experimental comparison of single-pixel imaging algorithms,” J. Opt. Soc. Am. A 35, 78–87 (2018).

39. J. Bioucas-Dias and M. Figueiredo, “A new TwIST: two-step iterative shrinkage/thresholding algorithms for image restoration,” IEEE Trans. Image Process. 16, 2992–3004 (2007).

40. B. R. Anderson, P. Price, R. Gunawidjaja, and H. Eilers, “Microgenetic optimisation algorithm for optimal wavefront shaping,” Appl. Opt. 54, 1485–1491 (2015).

41. M. R. N. Avanaki, Z. Fayyaz, F. Salimi, N. Mohammadian, A. Fatima, and M. R. Rahimi Tabar, “Wavefront shaping using simulated annealing algorithm for focusing light through turbid media,” Proc. SPIE 10494, 104946M (2018).

42. Y. Luo, S. Yan, H. Li, P. Lai, and Y. Zheng, “Focusing light through scattering media by reinforced hybrid algorithms,” APL Photon. 5, 016109 (2020).

43. G. Barbastathis, A. Ozcan, and G. Situ, “On the use of deep learning for computational imaging,” Optica 6, 921–943 (2019).

44. S. Bi, N. Xi, K. W. C. Lai, and X. Pan, “Design and implementation for image reconstruction of compressive sensing using FPGA,” in IEEE International Conference on Cyber Technology in Automation, Control and Intelligent Systems (2013), pp. 320–325.

45. P. Clemente, V. Duran, E. Tajahuerce, and J. Lancis, “Single-pixel digital ghost holography,” Phys. Rev. A 86, 041803 (2012).

46. A. Turpin, G. Musarra, V. Kapitany, F. Tonolini, A. Lyons, I. Starshynov, F. Villa, E. Conca, F. Fioranelli, R. Murray-Smith, and D. Faccio, “Spatial images from temporal data,” Optica 7, 900–905 (2020).

47. Y. Liu, J. Suo, Y. Zhang, and Q. Dai, “Single-pixel phase and fluorescence microscope,” Opt. Express 26, 32451–32462 (2018).

48. H. C. Liu, B. Yang, Q. Guo, J. Shi, C. Guan, G. Zheng, H. Mühlenbernd, G. Li, T. Zentgraf, and S. Zhang, “Single-pixel computational ghost imaging with helicity-dependent metasurface hologram,” Sci. Adv. 3, e1701477 (2017).

49. N. Chakrova, A. S. Canton, C. Danelon, S. Stallinga, and B. Rieger, “Adaptive illumination reduces photobleaching in structured illumination microscopy,” Biomed. Opt. Express 7, 4263–4274 (2016).

50. B. Liu, C. Chen, X. Di, J. Liao, S. Wen, Q. P. Su, X. Shan, Z. Xu, L. A. Ju, C. Mi, F. Wang, and D. Jin, “Upconversion nonlinear structured illumination microscopy,” Nano Lett. 20, 4775–4781 (2020).

51. L. Gao, X. Shan, X. Xu, Y. Liu, B. Liu, S. Li, S. Wen, C. Ma, D. Jin, and F. Wang, “Video-rate upconversion display from optimised lanthanide ion doped upconversion nanoparticles,” Nanoscale 12, 18595–18599 (2020).

References

  • View by:

  1. Y. Bromberg, O. Katz, and Y. Silberberg, “Ghost imaging with a single detector,” Phys. Rev. A 79, 053840 (2009).
    [Crossref]
  2. J. H. Shapiro, “Computational ghost imaging,” Phys. Rev. A 78, 061802 (2008).
    [Crossref]
  3. M. P. Edgar, G. M. Gibson, and M. J. Padgett, “Principles and prospects for single-pixel imaging,” Nat. Photonics 13, 13–20 (2019).
    [Crossref]
  4. H. Yu, R. Lu, S. Han, H. Xie, G. Du, T. Xiao, and D. Zhu, “Fourier-transform ghost imaging with hard x rays,” Phys. Rev. Lett. 117, 113901 (2016).
    [Crossref]
  5. D. Pelliccia, A. Rack, M. Scheel, V. Cantelli, and D. M. Paganin, “Experimental x-ray ghost imaging,” Phys. Rev. Lett. 117, 113902 (2016).
    [Crossref]
  6. A. Zhang, Y. He, L. Wu, L. Chen, and B. Wang, “Tabletop x-ray ghost imaging with ultra-low radiation,” Optica 5, 374–377 (2018).
    [Crossref]
  7. N. Radwell, K. J. Mitchell, G. M. Gibson, M. P. Edgar, R. Bowman, and M. J. Padgett, “Single-pixel infrared and visible microscope,” Optica 1, 285–289 (2014).
    [Crossref]
  8. R. I. Stantchev, D. B. Phillips, P. Hobson, S. M. Hornett, M. J. Padgett, and E. Hendry, “Compressed sensing with near-field THz radiation,” Optica 4, 989–992 (2017).
    [Crossref]
  9. J. Zhao, E. Yiwen, K. Williams, X. C. Zhang, and R. W. Boyd, “Spatial sampling of terahertz fields with sub-wavelength accuracy via probe-beam encoding,” Light Sci. Appl. 8, 55 (2019).
    [Crossref]
  10. S. C. Chen, Z. Feng, J. Li, W. Tan, L. H. Du, J. Cai, Y. Ma, K. He, H. Ding, Z. H. Zhai, Z. R. Li, C. W. Qiu, X. C. Zhang, and L. G. Zhu, “Ghost spintronic THz-emitter-array microscope,” Light Sci. Appl. 9, 99 (2020).
    [Crossref]
  11. A. Escobet-Montalban, R. Spesyvtsev, M. Chen, W. A. Saber, M. Andrews, C. S. Herrington, M. Mazilu, and K. Dholakia, “Wide-field multiphoton imaging through scattering media without correction,” Sci. Adv. 4, eaau1338 (2018).
    [Crossref]
  12. Y. Wu, P. Ye, I. O. Mirza, G. R. Arce, and D. W. Prather, “Experimental demonstration of an optical-sectioning compressive sensing microscope (CSM),” Opt. Express 18, 24565–24578 (2010).
    [Crossref]
  13. V. Studer, J. Bobin, M. Chahid, H. S. Mousavi, E. Candes, and M. Dahan, “Compressive fluorescence microscopy for biological and hyperspectral imaging,” Proc. Natl. Acad. Sci. USA 109, E1679–E1687 (2012).
    [Crossref]
  14. M. J. Sun, M. P. Edgar, G. M. Gibson, B. Sun, N. Radwell, R. Lamb, and M. J. Padgett, “Single-pixel three-dimensional imaging with time-based depth resolution,” Nat. Commun. 7, 12010 (2016).
    [Crossref]
  15. L. Olivieri, J. S. T. Gongora, L. Peters, V. Cecconi, A. Cutrona, J. Tunesi, R. Tucker, A. Pasquazi, and M. Peccianti, “Hyperspectral terahertz microscopy via nonlinear ghost imaging,” Optica 7, 186–191 (2020).
    [Crossref]
  16. M. Alemohammad, J. Shin, D. N. Tran, J. R. Stroud, S. P. Chin, T. D. Tran, and M. A. Foster, “Widefield compressive multiphoton microscopy,” Opt. Lett. 43, 2989–2992 (2018).
    [Crossref]
  17. C. Wen, M. Ren, F. Feng, W. Chen, and S.-C. Chen, “Compressive sensing for fast 3-D and random-access two-photon microscopy,” Opt. Lett. 44, 4343–4346 (2019).
    [Crossref]
  18. L. V. Amitonova and J. F. de Boer, “Compressive imaging through a multimode fiber,” Opt. Lett. 43, 5427–5430 (2018).
    [Crossref]
  19. S. Ota, R. Horisaki, Y. Kawamura, M. Ugawa, I. Sato, K. Hashimoto, R. Kamesawa, K. Setoyama, S. Yamaguchi, K. Fujiu, K. Waki, and H. Noji, “Ghost cytometry,” Science 360, 1246–1251 (2018).
    [Crossref]
  20. F. Ferri, D. Magatti, L. A. Lugiato, and A. Gatti, “Differential ghost imaging,” Phys. Rev. Lett. 104, 253603 (2010).
    [Crossref]
  21. X.-R. Yao, W.-K. Yu, X.-F. Liu, L.-Z. Li, M.-F. Li, L.-A. Wu, and G.-J. Zhai, “Iterative denoising of ghost imaging,” Opt. Express 22, 24268 (2014).
    [Crossref]
  22. C. Zhang, S. Guo, J. Cao, J. Guan, and F. Gao, “Object reconstitution using pseudo-inverse for ghost imaging,” Opt. Express 22, 30063–30073 (2014).
    [Crossref]
  23. W. Gong, “High-resolution pseudo-inverse ghost imaging,” Photon. Res. 3, 234–237 (2015).
    [Crossref]
  24. X. Lv, S. Guo, C. Wang, C. Yang, H. Zhang, J. Song, W. Gong, and F. Gao, “Experimental investigation of iterative pseudo inverse ghost imaging,” IEEE Photon. J. 10, 3900708 (2018).
    [Crossref]
  25. W. Wang, X. Hu, J. Liu, S. Zhang, J. Suo, and G. Situ, “Gerchberg-Saxton-like ghost imaging,” Opt. Express 23, 28416–28422 (2015).
    [Crossref]
  26. M. Amann and M. Bayer, “Compressive adaptive computational ghost imaging,” Sci. Rep. 3, 1545 (2013).
    [Crossref]
  27. Z. Zhang, X. Ma, and J. Zhong, “Single-pixel imaging by means of Fourier spectrum acquisition,” Nat. Commun. 6, 6225 (2015).
    [Crossref]
  28. B.-L. Liu, Z.-H. Yang, X. Liu, and L.-A. Wu, “Coloured computational imaging with single-pixel detectors based on a 2D discrete cosine transform,” J. Mod. Opt. 64, 259–264 (2017).
    [Crossref]
  29. M. Lyu, W. Wang, H. Wang, H. Wang, G. Li, N. Chen, and G. Situ, “Deep-learning-based ghost imaging,” Sci. Rep. 7, 17865 (2017).
    [Crossref]
  30. C. F. Higham, R. Murray-Smith, M. J. Padgett, and M. P. Edgar, “Deep learning for real-time single-pixel video,” Sci. Rep. 8, 2369 (2018).
    [Crossref]
  31. Y. He, G. Wang, G. Dong, S. Zhu, H. Chen, A. Zhang, and Z. Xu, “Ghost imaging based on deep learning,” Sci. Rep. 8, 6469 (2018).
  32. S. Rizvi, J. Cao, K. Zhang, and Q. Hao, “DeepGhost: real-time computational ghost imaging via deep learning,” Sci. Rep. 10, 11400 (2020).
  33. F. Wang, H. Wang, H. Wang, G. Li, and G. Situ, “Learning from simulation: an end-to-end deep-learning approach for computational ghost imaging,” Opt. Express 27, 25560–25572 (2019).
  34. N. Radwell, S. D. Johnson, M. P. Edgar, C. F. Higham, R. Murray-Smith, and M. J. Padgett, “Deep learning optimised single-pixel LiDAR,” Appl. Phys. Lett. 115, 231101 (2019).
  35. D. B. Conkey, A. N. Brown, A. M. Caravaca-Aguirre, and R. Piestun, “Genetic algorithm optimisation for focusing through turbid media in noisy environments,” Opt. Express 20, 4840–4849 (2012).
  36. X. Zhang and P. Kner, “Binary wavefront optimisation using a genetic algorithm,” J. Opt. 16, 125704 (2014).
  37. K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising by sparse 3D transform-domain collaborative filtering,” IEEE Trans. Image Process. 16, 2080–2095 (2007).
  38. L. Bian, J. Suo, Q. Dai, and F. Chen, “Experimental comparison of single-pixel imaging algorithms,” J. Opt. Soc. Am. A 35, 78–87 (2018).
  39. J. Bioucas-Dias and M. Figueiredo, “A new TwIST: two-step iterative shrinkage/thresholding algorithms for image restoration,” IEEE Trans. Image Process. 16, 2992–3004 (2007).
  40. B. R. Anderson, P. Price, R. Gunawidjaja, and H. Eilers, “Microgenetic optimisation algorithm for optimal wavefront shaping,” Appl. Opt. 54, 1485–1491 (2015).
  41. M. R. N. Avanaki, Z. Fayyaz, F. Salimi, N. Mohammadian, A. Fatima, and M. R. Rahimi Tabar, “Wavefront shaping using simulated annealing algorithm for focusing light through turbid media,” Proc. SPIE 10494, 104946M (2018).
  42. Y. Luo, S. Yan, H. Li, P. Lai, and Y. Zheng, “Focusing light through scattering media by reinforced hybrid algorithms,” APL Photon. 5, 016109 (2020).
  43. G. Barbastathis, A. Ozcan, and G. Situ, “On the use of deep learning for computational imaging,” Optica 6, 921–943 (2019).
  44. S. Bi, N. Xi, K. W. C. Lai, and X. Pan, “Design and implementation for image reconstruction of compressive sensing using FPGA,” in IEEE International Conference on Cyber Technology in Automation, Control and Intelligent Systems (2013), pp. 320–325.
  45. P. Clemente, V. Duran, E. Tajahuerce, and J. Lancis, “Single-pixel digital ghost holography,” Phys. Rev. A 86, 041803 (2012).
  46. A. Turpin, G. Musarra, V. Kapitany, F. Tonolini, A. Lyons, I. Starshynov, F. Villa, E. Conca, F. Fioranelli, R. Murray-Smith, and D. Faccio, “Spatial images from temporal data,” Optica 7, 900–905 (2020).
  47. Y. Liu, J. Suo, Y. Zhang, and Q. Dai, “Single-pixel phase and fluorescence microscope,” Opt. Express 26, 32451–32462 (2018).
  48. H. C. Liu, B. Yang, Q. Guo, J. Shi, C. Guan, G. Zheng, H. Mühlenbernd, G. Li, T. Zentgraf, and S. Zhang, “Single-pixel computational ghost imaging with helicity-dependent metasurface hologram,” Sci. Adv. 3, e1701477 (2017).
  49. N. Chakrova, A. S. Canton, C. Danelon, S. Stallinga, and B. Rieger, “Adaptive illumination reduces photobleaching in structured illumination microscopy,” Biomed. Opt. Express 7, 4263–4274 (2016).
  50. B. Liu, C. Chen, X. Di, J. Liao, S. Wen, Q. P. Su, X. Shan, Z. Xu, L. A. Ju, C. Mi, F. Wang, and D. Jin, “Upconversion nonlinear structured illumination microscopy,” Nano Lett. 20, 4775–4781 (2020).
  51. L. Gao, X. Shan, X. Xu, Y. Liu, B. Liu, S. Li, S. Wen, C. Ma, D. Jin, and F. Wang, “Video-rate upconversion display from optimised lanthanide ion doped upconversion nanoparticles,” Nanoscale 12, 18595–18599 (2020).

Supplementary Material (5)

Name                  Description
Supplement 1       Supplemental document
Visualization 1       Comparison between SEGI and standard GI methods for a binary object.
Visualization 2       Comparison between SEGI and standard GI methods for a grayscale object, with a sampling ratio of 36.6%.
Visualization 3       Comparison between SEGI and standard GI methods for a grayscale object, with a sampling ratio of 61.04%.
Visualization 4       SEGI with non-periodic moving binary and grayscale objects.

Data availability

All relevant data are available from the corresponding authors upon request.

Figures (7)

Fig. 1. Working principle of SEGI with a genetic algorithm. (a) Schematic diagram of SEGI. A population of $N$ random parent patterns is generated to illuminate the object. After the cost-function measurement, these patterns are ranked and used to breed $N/{2}$ offspring patterns. Each new pattern is created by combining the “ma” and “pa” patterns through a random breeding template, followed by a mutation operation on the pixel values. The mutated offspring patterns replace the lowest-ranked $M\;(= N/{2})$ patterns of the parent generation. These steps are repeated in every generation. Finally, the illumination patterns evolve towards an image of the object (bottom left), as the cost-function values continue to increase (bottom right, derived from the evolving patterns in SEGI). (b) Schematic of traditional computational ghost imaging with predetermined patterns and measured single-pixel signals (the fluctuating intensity distribution of ghost imaging with random patterns). (c) Examples of SEGI results with increasing generation number for a population of 30. $G$ is the generation number.
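
The evolutionary loop sketched in (a) maps onto only a few lines of code. The following minimal Python sketch evolves patterns against a simulated binary object; it is an illustration under stated assumptions rather than the authors' implementation: the toy object, the 1% mutation rate, and the use of a single first-generation reference in the cost function (cf. Eqs. (1) and (3) listed under Equations below) are all simplifications.

```python
import numpy as np

rng = np.random.default_rng(0)
PIX, N, MUT = 48, 30, 0.01        # 48 x 48 patterns, population of 30; MUT is an assumed mutation rate

obj = np.zeros((PIX, PIX))
obj[12:36, 20:28] = 1.0           # toy binary object standing in for the target

def signal(pattern):
    """Single-pixel signal S_i: overlap of pattern and object (Eq. (1), discretized)."""
    return np.sum(pattern * obj)

population = rng.random((N, PIX, PIX))               # random parent patterns
s_ref = np.mean([signal(p) for p in population])     # first-generation reference values
sum_ref = np.mean([p.sum() for p in population])

def cost(pattern, k=2):
    """Cost in the spirit of Eq. (3): signal gain per unit of illumination,
    normalized to the first generation (one scalar reference, for simplicity)."""
    return (signal(pattern) ** k * sum_ref) / (s_ref ** k * pattern.sum())

for g in range(1000):
    order = np.argsort([cost(p) for p in population])[::-1]
    population = population[order]                   # rank by cost function
    for m in range(N // 2, N):                       # offspring replace the lowest-ranked M = N/2
        ma, pa = population[rng.integers(0, N // 2, size=2)]
        template = rng.random((PIX, PIX)) < 0.5      # random breeding template
        child = np.where(template, ma, pa)           # combine "ma" and "pa"
        flips = rng.random((PIX, PIX)) < MUT         # mutate a small fraction of pixels
        child[flips] = rng.random(np.count_nonzero(flips))
        population[m] = child

image = population.mean(axis=0)   # the evolved patterns mimic the object's image
```

Because the cost function rewards a large bucket signal per unit of total illumination, the surviving patterns concentrate light on the object's support, so the evolved population itself serves as the image estimate without an offline reconstruction step.
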
Fig. 2. Numerical results of SEGI for a binary object with $k = 1$. (a) Original object. (b) SEGI raw (top) and median-filtered (bottom) results after $G = 1000$ with a population of 30. (c) PSNR of SEGI raw results for (top) population numbers ranging from 10 to 50 and (bottom) generation numbers ranging from 10 to 10,000. (d) Example object frames (52nd and 112th) from a 112-frame image series and (e) corresponding SEGI raw and median-filtered results with $\Delta G = 100$ and $N = {30}$. (f) PSNR and CF enhancement values of the dynamic SEGI. A video of the moving object is shown in Visualization 1.
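
The PSNR and median filtering that score the raw frames here and in Fig. 3 are standard operations; the sketch below shows the assumed definitions (a peak value of 1.0 for normalized images, and a 3 × 3 filter window, which is an assumption since the caption does not state the window size).

```python
import numpy as np
from scipy.ndimage import median_filter

def psnr(img, ref, peak=1.0):
    """Peak signal-to-noise ratio in dB for images normalized to [0, peak]."""
    mse = np.mean((img - ref) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.zeros((48, 48)); ref[12:36, 20:28] = 1.0    # toy ground-truth frame
raw = np.clip(ref + 0.2 * np.random.default_rng(1).normal(size=ref.shape), 0, 1)
print(psnr(raw, ref), psnr(median_filter(raw, size=3), ref))   # filtering raises the PSNR
```
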
Fig. 3. Characteristics of different $k$ values in the designed cost function. (a) SEGI raw results for static imaging after 1000 generations with $k = 2, 3, 4$. (b) SEGI raw results of the 112th frame for dynamic imaging with different $k$ values ($\Delta G = 100$) and (c) the corresponding median-filtered images. (d) PSNR of SEGI results for static and dynamic imaging during the evolution process. The population number $N$ is 30 in (a)–(d), and the movement of the object is the same as that in Fig. 2.
Fig. 4. Numerical results of SEGI for a grayscale object. (a) Original object image with coded pseudocolor (left) and filtered SEGI results with $G = 200$, $N = 40$ (middle) and $G = 300$, $N = 100$ (right). (b), (c) 162nd and 256th frames selected from a 256-frame object image series (left) and corresponding filtered SEGI results with $\Delta G = 100$, $N = 30$ (middle) and $\Delta G = 200$, $N = 100$ (right). (d) SSIM values of raw and filtered SEGI results for dynamic imaging (see Visualization 2 and Visualization 3 for the first two cases).
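
The SSIM values quoted here and in Figs. 6 and 7 can be reproduced with the standard scikit-image implementation; a minimal example, assuming images normalized to [0, 1]:

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

rng = np.random.default_rng(2)
truth = rng.random((48, 48))                               # stand-in grayscale object
recon = np.clip(truth + 0.1 * rng.normal(size=truth.shape), 0, 1)
print(ssim(truth, recon, data_range=1.0))                  # 1.0 would be a perfect match
```
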
Fig. 5. Numerical comparison between imaging results of grayscale objects with sampling ratios of (a) 36.6% with $N = {30}$ and (b) 61% with $N = {50}$. The imaging methods include GI, DGI, PGI, CS-sparse, CS-TV, and SEGI. ${\rm SEGI}_{\rm ini}$ is the first imaging frame in the “initial evolving” state, and SEGI is the 183rd image in the “imaging” state.
Fig. 6. Comparison of SEGI and CGI under dynamic particle disturbance. (a) The Cameraman image is used as the original object. (b) Example frames with dynamic particle disturbance. During the measurement time of both methods, the target is obscured by randomly distributed scattering particles refreshed at a frequency of 122 Hz, with ${\sim}{0.1}\%$ additional detection noise. The illumination patterns have a frequency of 10 kHz. The SSIM indices, computed against the original target, of SEGI-r (raw) and SEGI-f (BM3D filtered) are shown on the left of (c) and (d). The first, 6000th, and 1000th SEGI images are shown on the right. Here $M = {20}$ and $N = {40}$ in SEGI. (e)–(g) Results acquired with the DGI, PGI, and CS-TV methods. “Mode 1” (orange line) recovers the image from the measurements within each dynamic time unit; the images recovered in the first and 6000th units (with a sampling ratio of 2%) are shown in the middle. “Mode 2” (green line) combines all former and current measurements and the corresponding illumination patterns to recover the image; the 1000th image (with a cumulative sampling ratio of 2000%) and the corresponding computation time are shown on the right. A dynamic time unit is the period during which the disturbing particles remain static.
Fig. 7. Experimental results of SEGI and comparison of SEGI and CGI in turbid media. (a) SEGI for static objects (${\sim}{10}\;{\rm mm}$ masks) with $N = 20$, $k = 2$, and $G = 100, 200, 500, 2000$ from left to right, respectively. The results are shown with a Gaussian blur filter applied. (b) For dynamic SEGI, a 1951 USAF test target (R3L3S1N, Thorlabs) mounted on a translation stage (DDSM100, Thorlabs) moves unidirectionally across the FOV. Four selected frames of dynamic SEGI results are shown on the right, corresponding to the color-coded positions indicated. (c) Schematic diagram of scattering imaging of the object “T” through turbid sand. The object is placed at the middle of a 10 cm long plastic tank (Supplement 1, Fig. S21). A magnetic stirrer controls the turbidity level from “0” to “5” through increasing rotation speed. The measured mean free paths for turbidity levels 1 to 5 are 592.3, 265.1, 109.5, 86.3, and 57.7 mm, respectively, as listed in Table 1 in Supplement 1. The example pictures show camera images of the object without and with the turbid disturbance. (d) Measured single-pixel intensities in traditional CGI with the same illumination patterns at different turbidity levels. (e) Top: SSIM of SEGI and traditional GI at different turbidity levels. Bottom: cost-function enhancements in SEGI at different turbidity levels. (f) Imaging results of SEGI, DGI, PGI, and CS-TwIST at different turbidity levels. All images are shown with Gaussian filtering applied.
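
One common way to convert a ballistic-transmission measurement into the mean free paths quoted in (c) is Beer–Lambert attenuation, $T = \exp(-L/\ell)$, giving $\ell = -L/\ln T$. The paper's actual calibration procedure is given in Supplement 1, so the snippet below is only an assumed reconstruction; the transmission values are hypothetical numbers chosen to reproduce the quoted mean free paths under this model.

```python
import numpy as np

L = 100.0                                            # tank length in mm
T = np.array([0.845, 0.686, 0.401, 0.314, 0.177])    # hypothetical ballistic transmissions
mfp = -L / np.log(T)                                 # ~592, 265, 110, 86, 58 mm for levels 1-5
```
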

Equations (4)

$$S_i = \iint I_i(x,y)\,O(x,y)\,{\rm d}x\,{\rm d}y, \tag{1}$$

$$R(x,y) = \left\langle S\,I(x,y) \right\rangle - \left\langle S \right\rangle \left\langle I(x,y) \right\rangle, \tag{2}$$

$$CF(i,g) = \frac{\left(S_i^g\right)^k \sum_{p=1}^{P_x} I_i^1\left(x(p),y(p)\right)}{\left(S_i^1\right)^k \sum_{p=1}^{P_x} I_i^g\left(x(p),y(p)\right)}, \tag{3}$$

$$CF(i,g) = \frac{\left(S_i^g\right)^2 \sum_{p=1}^{P_x} \left(I_i^1\left(x(p),y(p)\right)\right)^2}{\left(S_i^1\right)^2 \sum_{p=1}^{P_x} \left(I_i^g\left(x(p),y(p)\right)\right)^2}. \tag{4}$$
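
In discrete form the integral in Eq. (1) becomes a pixel sum, and Eq. (2) is the ensemble correlation used by traditional GI reconstruction. A compact numerical check, assuming random illumination patterns and a toy object:

```python
import numpy as np

rng = np.random.default_rng(3)
PIX, M = 48, 5000                          # image size and number of patterns
obj = np.zeros((PIX, PIX))
obj[10:38, 22:26] = 1.0                    # toy object O(x, y)

patterns = rng.random((M, PIX, PIX))       # illumination patterns I_i(x, y)
S = np.tensordot(patterns, obj, axes=2)    # Eq. (1): S_i = sum over pixels of I_i * O

# Eq. (2): R(x, y) = <S I(x, y)> - <S><I(x, y)>, averaged over the M patterns
R = np.tensordot(S, patterns, axes=([0], [0])) / M - S.mean() * patterns.mean(axis=0)
```

Here R recovers the object up to a background and scale factor; the cost functions of Eqs. (3) and (4) replace this offline correlation by scoring each evolving pattern against its first-generation signal and illumination sums.
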
