## Abstract

In single-pixel imaging (SPI), a large number of illuminations is usually required to capture a single image. Consequently, SPI may only achieve a very low frame rate for a fast-moving object, and the reconstructed images are contaminated with blur and noise. In previous works, some attempts have been made to perform motion estimation between neighboring frames in a SPI video to enhance the image quality. However, motion estimation and quality enhancement from one single image frame in dynamic SPI have seldom been investigated. In this work, it is assumed that some prior knowledge about the type of motion the object undergoes is available. A motion model of the target object is constructed and the motion parameters are optimized within a search space. Our proposed scheme differs from common motion deblurring techniques for photographs, since the motion blur mechanism in SPI is significantly different from that of a conventional camera. Experimental results demonstrate that the images reconstructed with our proposed scheme in dynamic SPI have much better quality.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

## 1. Introduction

In a conventional camera, a pixelated sensor array is usually utilized to capture a two-dimensional image of the target object. As a novel computational imaging technique, single-pixel imaging (SPI) [1–5] only requires a sensor with one single pixel to record a photograph with two-dimensional spatial resolution. The object is sequentially illuminated with varying patterns, and at each illumination the total light intensity of the entire object scene is recorded as a single-pixel value by the detector. Finally, the object image is computationally reconstructed from both the recorded single-pixel intensity sequence and the illumination patterns. A typical optical setup for SPI is shown in Fig. 1. SPI exhibits substantial advantages in some invisible wavebands where conventional pixelated sensor arrays are expensive or even unavailable [4]. In addition, SPI can realize indirect-line-of-sight imaging and imaging under weak light conditions. The potential applications of SPI include, but are not limited to, remote sensing [6], three-dimensional measurement [7,8], microscopy [9,10], image encryption [11–13], lidar detection [14,15], gas leak monitoring [16] and tomography [17].

Despite the success, SPI has one major drawback: low imaging efficiency. Common illumination devices, such as a spatial light modulator (SLM) or a digital micromirror device (DMD), can only project a limited number of patterns within a certain time period, so it is difficult for a SPI system to capture many images within a short time. This problem can be addressed to a certain extent by designing optimized illumination patterns (e.g. Fourier transform patterns [18], Hadamard transform patterns [19], principal component patterns [20], wavelet patterns [21], sub-pixel shifted patterns [22]), by utilizing a multi-element detector [23], and by improved reconstruction algorithms [24]. However, a considerable number of illuminations and recordings are often still required for capturing a single image in SPI, so the frame rate for capturing a SPI video of a dynamic object scene can be rather low. In this work, the capture of a single image or a video clip with SPI of a non-static object that moves fast compared with the frame rate of the SPI system is referred to as dynamic SPI. If the target object moves sufficiently fast within the imaging period of one image frame of the video, the reconstructed SPI image can suffer from severe quality degradation such as blur and noise. In some previous works [25,26], motion estimation is performed between consecutive video frames to recover high-quality reconstructed images for low frame rate SPI. However, motion estimation and quality enhancement from only one single image frame in dynamic SPI has seldom been investigated in the past. In addition, common single-image deblurring techniques for photographs captured by conventional cameras, such as the Wiener filter, the regularized filter and the Lucy-Richardson method [27,28], are not directly applicable to the reconstructed images in SPI, because the motion blur mechanism in a SPI system is significantly different from that of a conventional camera.
In this work, we propose, for the first time, a tailor-made scheme that models the motion blur in SPI and reduces the motion blur effect using one single image.

In our proposed scheme, if the object moves during the pattern illuminations in SPI (with a known type of motion), this is equivalent to a static object whose illumination patterns are geometrically transformed in a reverse manner at different time points. The object image can then be reconstructed from the recorded single-pixel data sequence using the transformed illumination patterns, instead of the original illumination patterns. The motion parameters used in transforming the patterns, such as the moving or rotating speed, can be optimized by evaluating the quality of the reconstructed image with certain metric functions. Finally, the object image can be reconstructed with high fidelity using the optimized parameters.

## 2. Principles of our proposed scheme

### 2.1. Model of single-pixel imaging

In single-pixel imaging, the target object image is denoted by $O(n)$ and the illumination patterns are assumed to be Hadamard patterns $H(m,n)$. The total number of pixels in the object image and in each illumination pattern is $N$. The pixel intensities of the object image $O(n)$ can be represented as a column vector of length $N$, where $n$ denotes the pixel index ($1 \le n \le N$). The number of illumination patterns is $M$, and $M$ equals $N$ if the sampling ratio is 1. Each two-dimensional illumination pattern can be represented as a row vector of length $N$. $H(m,n)$ denotes the $n$-th pixel (or element) in the $m$-th illumination pattern ($1 \le m \le M$, $1 \le n \le N$). All $M$ illumination patterns constitute a matrix H with $M$ rows and $N$ columns. The recorded single-pixel intensity sequence $I(m)$ can be represented as a column vector of length $M$ ($1 \le m \le M$), which equals the product of H and O, given by Eqs. (1) and (2).
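As a minimal numerical sketch of this forward model (assuming ±1 Sylvester-Hadamard patterns from `scipy.linalg.hadamard`; a practical SPI system projects binary patterns, and the alternating-projection reconstruction used later in the paper is not reproduced here):

```python
import numpy as np
from scipy.linalg import hadamard

N = 1024                        # pixels per image and per pattern (32 x 32)
M = N                           # sampling ratio of 1
H = hadamard(N).astype(float)   # M x N matrix: one Hadamard pattern per row
O = np.random.rand(N)           # object image flattened into a length-N vector
I = H @ O                       # recorded single-pixel intensity sequence

# With the full orthogonal basis (H @ H.T = N * identity), a static object
# is recovered exactly by the inverse Hadamard transform:
O_rec = H.T @ I / N
```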

The object image can be computationally recovered from I and H with various algorithms [24]. The alternating projection algorithm [24,29] is employed in this work to reconstruct the object image, as it yields more accurate results than the direct inverse Hadamard transform.

However, if the object is non-static while the 1st, 2nd, 3rd, …, $M$-th patterns are illuminated sequentially, the recorded single-pixel intensity sequence contains mixed information from object images in different motion states. The original object image at a certain time point cannot be reconstructed with high fidelity. To address this problem, we propose a scheme of motion estimation and quality enhancement for a dynamic scene in SPI.

### 2.2. Proposed scheme of motion estimation and quality enhancement

#### 2.2.1. Model of motion estimation

In this work, we consider that if the object is translationally shifting, rotating around a certain center point, or geometrically transformed in other ways, this is equivalent to a static object whose illumination patterns are geometrically transformed in an inverse manner. For example, in Fig. 2, the object is translationally shifting to the right during the illumination of the varying Hadamard patterns.

This is equivalent to a static object with the Hadamard illumination patterns translationally shifted to the left. If the object is rotating clockwise, the illumination patterns are equivalently rotating counter-clockwise, as shown in Fig. 3.

In practice, we may have some prior knowledge about the type of movement the object undergoes (e.g. shifting, rotating or moving in other ways), but the exact motion parameters (such as the moving or rotation speed) are not known in advance. It can be assumed that within the imaging time of one image frame, the motion parameters of the object remain constant, and a rough range of parameter values is known in advance.

Suppose the time interval between two subsequent illumination patterns within one frame is $\Delta t$, and the object is translationally shifting with speeds ${v}_{x}$ and ${v}_{y}$ in the x and y directions respectively. The illumination patterns are then geometrically transformed in the following way: the first illumination pattern (the first row of H) remains unchanged, the second illumination pattern (the second row of H) is shifted by $-{v}_{x}\Delta t$ and $-{v}_{y}\Delta t$ in the two directions respectively, …, and the $K$-th illumination pattern is shifted by $-(K-1){v}_{x}\Delta t$ and $-(K-1){v}_{y}\Delta t$. The new illumination matrix after the transformation, ${H}_{new}$, can be expressed as Eq. (3).

where ${H}_{new}$ denotes the illumination pattern matrix after the transform and T denotes the transform operation. The object image ${O}_{new}({v}_{x},{v}_{y})$ is reconstructed from ${H}_{new}$ (instead of the original H) and I. If the estimated ${v}_{x}$ and ${v}_{y}$ are identical or close to the actual moving speed of the target object, the object image can be recovered with good fidelity. Otherwise, the reconstructed image may still suffer from noise and degradation. A rough range of possible values of ${v}_{x}$ and ${v}_{y}$ can be pre-defined and a search space can be constructed. For other types of motion besides translational shift, the parameters can be optimized in a similar way. For example, if the object is rotating, three parameters need to be optimized simultaneously: the x-coordinate of the rotation center ${P}_{x}$, the y-coordinate of the rotation center ${P}_{y}$, and the rotation speed $w$, given by Eq. (4).
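The pattern transformation for a translational motion hypothesis can be sketched as follows (a minimal sketch assuming square patterns and circular shifts via `np.roll`; the actual transform T may handle pattern boundaries differently):

```python
import numpy as np

def transform_patterns(H, side, vx, vy, dt):
    """Shift the k-th pattern (0-indexed) by -k * (vx, vy) * dt pixels."""
    H_new = np.empty_like(H)
    for k in range(H.shape[0]):
        pattern = H[k].reshape(side, side)
        dy = int(round(-k * vy * dt))   # row shift for the (k+1)-th pattern
        dx = int(round(-k * vx * dt))   # column shift
        H_new[k] = np.roll(pattern, (dy, dx), axis=(0, 1)).ravel()
    return H_new
```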

#### 2.2.2. Optimization of motion parameters

In our scheme, the optimal motion parameters are determined in the following way, as shown in Fig. 4.

The optimized ${H}_{new}({v}_{x},{v}_{y})$ and ${O}_{new}({v}_{x},{v}_{y})$ can be obtained by finding the optimal ${v}_{x}$ and ${v}_{y}$ in the search space. An image quality metric function $E[{O}_{new}({v}_{x},{v}_{y})]$ is used to evaluate the quality of each reconstructed image with different motion parameters in the optimization. In this work, the metric function is defined as the image variance, given by Eq. (5).

where $Var[\,]$ denotes the variance of all pixel intensity values of an image. This metric can be used to measure the level of noise and blur in an image: in our examples, $E[{O}_{new}({v}_{x},{v}_{y})]$ becomes lower as ${O}_{new}({v}_{x},{v}_{y})$ contains more noise and blur. In previous works, similar metrics have been employed to distinguish sharp in-focus images from blurred defocused images [30,31]. A reconstructed image with maximum $E[{O}_{new}({v}_{x},{v}_{y})]$ (minimum amount of noise) is obtained when ${v}_{x}$ and ${v}_{y}$ reach their optimal values. In this work, the simple genetic algorithm (SGA) [32] is adopted for the optimization of the motion parameters. Other heuristic global optimization algorithms, such as simulated annealing [33] and the artificial bee colony algorithm (ABC) [34], could be applied to this problem as well. As the working principles of the genetic algorithm have been widely reported in past literature, we only briefly describe how SGA is used to solve our problem and omit the details.
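The variance metric of Eq. (5) can be sketched directly (an illustrative toy comparison: a crisp bright square versus the same energy smeared uniformly over the frame):

```python
import numpy as np

def quality_metric(image):
    """E[.] of Eq. (5): variance of all pixel intensities of the image."""
    return np.var(image)

sharp = np.zeros((32, 32))
sharp[10:20, 10:20] = 1.0                        # crisp bright square
smeared = np.full((32, 32), sharp.mean())        # same mean intensity, fully blurred
print(quality_metric(sharp) > quality_metric(smeared))  # True
```

A sharp reconstruction concentrates intensity in the object region and therefore has a higher variance than a blurred, smeared one, which is why maximizing this metric favors the correct motion parameters.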

First, the value of each motion parameter (such as ${v}_{x}$ and ${v}_{y}$ for translational shifting, or ${P}_{x}$, ${P}_{y}$ and w for rotation) is encoded into a binary bit sequence. For example, 8 bits are used to represent 255 possible values of ${v}_{x}$ within the search range [-31 31]. The binary bit sequences representing all the parameters are concatenated into a single binary sequence, referred to as an individual in the algorithm. The population consists of a certain number of individuals (e.g. 30), referred to as the population size. Initially, each individual in the population is a random binary bit sequence, representing random motion parameters. Then the fitness of each individual, which in our problem is the quality of the reconstructed image measured by the metric function, is evaluated. According to the fitness values, selection, crossover and mutation operations are applied to the individuals in the population. The individuals are updated and the fitness values are evaluated again in the next iteration. The population keeps evolving and the individuals keep being updated after each iteration. Finally, after a certain number of iterations, one individual corresponding to the optimal motion parameters, which yields a reconstructed image with the best fidelity, is obtained and the search is finished.
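The encoding and decoding of motion parameters can be sketched as follows (a hedged sketch: the 8-bit-per-parameter mapping over a search range follows the example above, while the exact quantization used in the paper's SGA implementation is an assumption):

```python
import numpy as np

BITS = 8  # bits per motion parameter, as in the paper's example

def decode(individual, ranges):
    """Map an individual's concatenated bit string back to parameter values."""
    params = []
    for i, (lo, hi) in enumerate(ranges):
        bits = individual[i * BITS:(i + 1) * BITS]
        level = int("".join(str(b) for b in bits), 2)          # integer 0 .. 255
        params.append(lo + level * (hi - lo) / (2 ** BITS - 1))
    return params

rng = np.random.default_rng(0)
individual = rng.integers(0, 2, size=2 * BITS)   # random encoding of (v_x, v_y)
vx, vy = decode(individual, [(-31, 31), (-31, 31)])
```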

After the SGA optimization based on the motion model and metric functions, a reconstructed image with enhanced quality can be obtained with the optimized motion parameters.
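For a small search space, the overall optimization loop can alternatively be sketched as an exhaustive search over candidate speeds (SGA replaces this loop in the actual scheme); `transform_patterns` and `reconstruct` here are hypothetical helpers standing in for the pattern transform of Section 2.2.1 and the image reconstruction algorithm:

```python
import numpy as np

def grid_search(I, H, side, dt, speeds, transform_patterns, reconstruct):
    """Return the (vx, vy) whose reconstruction maximizes the variance metric."""
    best_score, best_v = -np.inf, None
    for vx in speeds:
        for vy in speeds:
            H_new = transform_patterns(H, side, vx, vy, dt)  # motion hypothesis
            O_new = reconstruct(H_new, I)                    # trial reconstruction
            score = np.var(O_new)                            # image-variance metric
            if score > best_score:
                best_score, best_v = score, (vx, vy)
    return best_v
```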

## 3. Results and discussion

### 3.1. Simulation results

In the simulation, the object images and illumination patterns contain 32 × 32 pixels (N = 1024). The total number of illuminations is assumed to be M = 1024 and the sampling ratio is 1. The illumination patterns are commonly used Hadamard patterns.

In the first example, the frame rate is one frame per second. The object (a car) is moving from left to right at four different speeds (0, 1, 8 and 15 pixels per second). The motion status of the object before the first illumination and after the last (1024th) illumination in one image frame is shown in Figs. 5(a) and 5(b). The images reconstructed from the recorded single-pixel intensity data and standard Hadamard patterns with the conventional method are shown in Fig. 5(c). It can be observed that the reconstructed images have good quality for static or slowly moving objects (Examples 1.1 and 1.2), but are noisy and blurred for fast moving objects (Examples 1.3 and 1.4), and the degradation worsens as the speed increases. The conventional reconstruction implicitly assumes that ${v}_{x}$ and ${v}_{y}$ are both 0, which is identical or very close to the actual moving speed for a static or very slowly moving object (Examples 1.1 and 1.2), so there is no need to estimate the actual motion parameters with our proposed scheme. However, this assumption is obviously invalid for fast moving objects (Examples 1.3, 1.4 and 1.5), and motion estimation with our proposed scheme is necessary. The reconstruction results with our proposed scheme for fast moving objects are shown in Fig. 5(d). The search ranges for ${v}_{x}$ and ${v}_{y}$ in SGA are both set to [-31 31] (unit: pixels/s). Each parameter is represented by 8 bits, the population size is 30, and the maximum number of iterations is set to 10 in the SGA optimization. The results illustrate that the reconstructed images in our scheme have significantly better image quality than those obtained by the conventional method.
The reconstructed images in our scheme have similar quality for varying speeds, since it takes an equal amount of computation to estimate a high-speed value and a low-speed value. In the examples above, the object moves in the y direction with varying speeds; our proposed scheme achieves similar performance when the object moves in the x direction.

If the object is moving in both the x and y directions (${v}_{x}$ = 5, ${v}_{y}$ = 10), the reconstructed results are shown in Example 1.5 of Fig. 5: our proposed scheme can still perform accurate motion estimation and significantly enhance the image quality. The estimated and true motion parameters are compared in Table 1 and their values are quite close. The causes of error in the estimated values are analyzed at the end of Section 3.1.

In the second example, the assumed frame rate is again one frame per second. The object (an airplane) is rotating clockwise around a center point (8,24) at four different speeds (0, 5, 15 and 30 degrees per second). The motion status of the object at the start and the end of a video frame is shown in Figs. 6(a) and 6(b). The images reconstructed from the recorded single-pixel intensity data and standard Hadamard patterns with the conventional method are shown in Fig. 6(c). Similar to Fig. 5, when the object is static or rotating very slowly, the conventionally reconstructed images have good quality; when the object is rotating fast, they are heavily degraded. The object images are then reconstructed with our proposed scheme for the fast-rotating cases. In the optimization, the search ranges of ${P}_{x}$, ${P}_{y}$ and w are [1,32], [1,32] and [-100 100] (unit: degrees/s) respectively. Each parameter is represented by 8 bits, the population size is 50, and the maximum number of iterations is set to 10 in the SGA optimization. The results in Fig. 6(d) exhibit significantly better quality than those in Fig. 6(c). The estimated motion parameters are close to the actual values, with minor discrepancies, as illustrated in Table 2.

Compared with the first example, three unknown motion parameters instead of two need to be optimized. The population size in SGA is increased from 30 to 50 due to the enlarged search space. For both the translational shifting movement in Fig. 5 and the rotating movement in Fig. 6, the images reconstructed with the conventional method contain a blurred object and some surrounding noise points. Both low frequency and high frequency noise is introduced, since the recorded single-pixel intensity sequence (i.e. the Hadamard spectrum) is distorted for a fast-moving object. The performance of our proposed method in restoring the reconstructed images is similar for translational shifting and rotating movements with varying speeds. The images of the original target objects can be recovered with good fidelity, with the blur and noise maximally reduced.

In addition, a simulation of a rotating disk printed with white number digits rotating around its center at specified speeds is performed, which models the optical experiment in Section 3.2. Each object image (32 × 32 pixels) contains a digit, and the digit rotates around a center point outside the image, as shown in Fig. 7 and Fig. 8(a). The position of the rotation center is not known, and the search ranges of ${P}_{x}$ and ${P}_{y}$ are [0 32] and [-100 -20] in our scheme, shown in Fig. 7.

In our proposed scheme, the search range for the rotating speed w is [0 80] (unit: rounds/s). The actual rotating speed of the object is assumed to be 0, 2, 4 and 8 rounds/s respectively. The number of illumination patterns projected within one second is 500000. The images reconstructed with the conventional method are shown in Figs. 8(b)–8(e). Similar to the examples above, the reconstructed image quality degrades as the disk rotates faster. The images reconstructed with our proposed scheme when the disk is rotating at 4 and 8 rounds/s are shown in Figs. 8(f) and 8(g). It can be observed that the image fidelity is significantly improved compared with Figs. 8(d) and 8(e). The parameters estimated with our proposed scheme are shown in Table 3, in comparison with the actual motion parameters.

In Tables 1, 2 and 3, the motion parameters estimated in our proposed scheme are close to the actual ones, but some minor error and discrepancy can be found. The estimated values do not fully agree with the actual values for the following three reasons. First, SGA is not an exhaustive search algorithm and does not attempt every possible value in the search space. The absolute best solution cannot be guaranteed, but a sub-optimal solution close to it can usually be obtained. Second, the function $E[{O}_{new}({v}_{x},{v}_{y})]$ (i.e. the variance of image intensities in this work) is a very effective image quality metric but may not always fully reflect the true image quality. The metric value may not be exactly at its peak when the estimated parameters are identical to the actual ones. Third, a set of motion parameters with not fully correct values may sometimes yield results very similar to those of the correct motion parameters. For example, in Fig. 9(a) and Fig. 9(b) the same object rotates around different centers with different speeds. The radius in Fig. 9(a) is longer, but the rotating speed in Fig. 9(b) is higher. The two sets of motion parameters are compared in Fig. 9(c). It can be observed that in both cases the object moves from Point A to Point B along very similar motion paths even though the motion parameters are different. In this situation, if the motion parameters of Fig. 9(b) are used as the estimated parameters in our proposed scheme for the moving object in Fig. 9(a), the reconstructed images will still have very satisfactory quality.

### 3.2. Experimental results

In the optical experiment, a black disk printed with white number digits is rotating around its center at specified speeds. The optical setup is the same as in the previous work [35]. Each digit passes through a fixed imaging window as the disk rotates. An LED array with 32 × 32 pixel resolution is employed to project Hadamard illumination patterns at a rate of 500000 patterns per second. The rotating speed of the disk is specified as 2, 4 and 8 rounds per second. The single-pixel intensity data of three images at different time points are recorded by the SPI system for each rotating speed. The object images are first reconstructed with the conventional method; the reconstructed results [Fig. 10(a)] when the disk is rotating at 2 rounds per second have good quality. However, as the rotating speed increases, the reconstructed images [Fig. 10(b) and Fig. 10(c)] become increasingly degraded. The object images are then reconstructed with our proposed scheme, based on the rotating motion model shown in Fig. 7, for rotating speeds of 4 and 8 rounds per second, shown in Fig. 10(d) and Fig. 10(e). In the optimization, the parameters are the same as in the simulation: the search ranges of ${P}_{x}$, ${P}_{y}$ and w are [0 32], [-100 -20] and [0 80] (unit: rounds/s) respectively; each parameter is represented by 8 bits; the population size is 30; and the maximum number of iterations in SGA is set to 10. As a comparison, the average image variances of Fig. 10(b) and Fig. 10(d) are 0.0260 and 0.0354 respectively, and those of Fig. 10(c) and Fig. 10(e) are 0.0252 and 0.0359 respectively. The visual quality of the images in Figs. 10(d) and 10(e) evidently surpasses that of Figs. 10(b) and 10(c).

The results of the optical experiments generally agree with the simulation results in Section 3.1, and the effectiveness of our proposed scheme is verified experimentally. Compared with the simulation results in Fig. 8, the reconstructed images in Fig. 10 are contaminated with more noise, with both the conventional reconstruction method and our proposed method. In practical optical experiments, many aspects such as the light illumination, the precision of alignment and the accuracy of the detector may deviate from the ideal assumptions of the simulation to some extent. As a result, more noise is introduced into the reconstructed images in addition to the blur and noise due to object motion. The reconstruction results indicate that our proposed scheme has a certain robustness under noisy conditions.

## 4. Conclusion

In single-pixel imaging (SPI), a two-dimensional image can be captured by a single-pixel detector instead of a sensor array device. As a trade-off, a large number of illuminations is required for capturing one single image. Since the projection device can only project a limited number of illumination patterns within a short time, the video frame rate in SPI can be quite low for a highly dynamic object scene. When the target object is moving fast, the reconstructed image in SPI suffers from severe quality degradation due to motion blur. Motion estimation and image deblurring can be performed to enhance the reconstructed image quality. Different from the motion estimation schemes based on multiple neighboring video frames in previous works, we propose, for the first time, a motion estimation and image quality enhancement scheme for one single reconstructed image in SPI. Our proposed scheme is based on the unique mechanism of SPI and differs from conventional single-photograph deblurring methods for sensor array cameras. It is assumed that some prior knowledge about the type of object motion is known, and a motion model is constructed for the target object. The object is assumed to be static while the illumination patterns equivalently move in a reverse manner. The motion parameters in the model are optimized, and a high-fidelity object image can be reconstructed from the transformed illumination patterns with the optimal parameters. Experimental results demonstrate that the images reconstructed with our proposed scheme have substantially better quality than those obtained with conventional methods.

In future works, blind motion estimation and quality enhancement for one single reconstructed image in SPI, without any prior knowledge of the motion type, will be attempted. The type of motion the object undergoes could possibly be identified with artificial intelligence methods from the recorded single-pixel intensity sequence or from a low-quality reconstructed image.

## Funding

National Natural Science Foundation of China (61805145, 11774240, 61675016); Natural Science Foundation of Beijing Municipality (4172039); Leading Talents of Guangdong Province Program (00201505); Natural Science Foundation of Guangdong Province (2016A030312010).

## References

**1. **A. Gatti, E. Brambilla, M. Bache, and L. A. Lugiato, “Ghost imaging with thermal light: comparing entanglement and classical correlation,” Phys. Rev. Lett. **93**(9), 093602 (2004). [CrossRef] [PubMed]

**2. **J. H. Shapiro, “Computational ghost imaging,” Phys. Rev. A **78**(6), 061802 (2008). [CrossRef]

**3. **M. F. Duarte, M. A. Davenport, D. Takhar, J. N. Laska, T. Sun, K. F. Kelly, and R. G. Baraniuk, “Single-pixel imaging via compressive sampling,” IEEE Signal Process. Mag. **25**(2), 83–91 (2008). [CrossRef]

**4. **M. P. Edgar, G. M. Gibson, and M. J. Padgett, “Principles and prospects for single-pixel imaging,” Nat. Photonics **13**(1), 13–20 (2019). [CrossRef]

**5. **M. J. Sun and J. M. Zhang, “Single-pixel imaging and its application in three-dimensional reconstruction: a brief review,” Sensors (Basel) **19**(3), 732 (2019). [CrossRef] [PubMed]

**6. **B. I. Erkmen, “Computational ghost imaging for remote sensing,” J. Opt. Soc. Am. A **29**(5), 782–789 (2012). [CrossRef] [PubMed]

**7. **M.-J. Sun, M. P. Edgar, G. M. Gibson, B. Sun, N. Radwell, R. Lamb, and M. J. Padgett, “Single-pixel three-dimensional imaging with time-based depth resolution,” Nat. Commun. **7**(1), 12010 (2016). [CrossRef] [PubMed]

**8. **E. Salvador-Balaguer, P. Latorre-Carmona, C. Chabert, F. Pla, J. Lancis, and E. Tajahuerce, “Low-cost single-pixel 3D imaging by using an LED array,” Opt. Express **26**(12), 15623–15631 (2018). [CrossRef] [PubMed]

**9. **N. Radwell, K. J. Mitchell, G. M. Gibson, M. P. Edgar, R. Bowman, and M. J. Padgett, “Single-pixel infrared and visible microscope,” Optica **1**(5), 285–289 (2014). [CrossRef]

**10. **R. S. Aspden, N. R. Gemmell, P. A. Morris, D. S. Tasca, L. Mertens, M. G. Tanner, R. A. Kirkwood, A. Ruggeri, A. Tosi, R. W. Boyd, G. S. Buller, R. H. Hadfield, and M. J. Padgett, “Photon-sparse microscopy: visible light imaging using infrared illumination,” Optica **2**(12), 1049 (2015). [CrossRef]

**11. **Z. Zhang, S. Jiao, M. Yao, X. Li, and J. Zhong, “Secured single-pixel broadcast imaging,” Opt. Express **26**(11), 14578–14591 (2018). [CrossRef] [PubMed]

**12. **S. Jiao, C. Zhou, Y. Shi, W. Zou, and X. Li, “Review on optical image hiding and watermarking techniques,” Opt. Laser Technol. **109**, 370–380 (2019). [CrossRef]

**13. **S. Liansheng, W. Jiahao, T. Ailing, and A. Asundi, “Optical image hiding under framework of computational ghost imaging based on an expansion strategy,” Opt. Express **27**(5), 7213–7225 (2019). [CrossRef] [PubMed]

**14. **C. Zhao, W. Gong, M. Chen, E. Li, H. Wang, W. Xu, and S. Han, “Ghost imaging lidar via sparsity constraints,” Appl. Phys. Lett. **101**(14), 141123 (2012). [CrossRef]

**15. **W. Gong, C. Zhao, H. Yu, M. Chen, W. Xu, and S. Han, “Three-dimensional ghost imaging lidar via sparsity constraint,” Sci. Rep. **6**(1), 26133 (2016). [CrossRef] [PubMed]

**16. **G. M. Gibson, B. Sun, M. P. Edgar, D. B. Phillips, N. Hempler, G. T. Maker, G. P. Malcolm, and M. J. Padgett, “Real-time imaging of methane gas leaks using a single-pixel camera,” Opt. Express **25**(4), 2998–3005 (2017). [CrossRef] [PubMed]

**17. **J. Peng, M. Yao, J. Cheng, Z. Zhang, S. Li, G. Zheng, and J. Zhong, “Micro-tomography via single-pixel imaging,” Opt. Express **26**(24), 31094–31105 (2018). [CrossRef] [PubMed]

**18. **Z. Zhang, X. Ma, and J. Zhong, “Single-pixel imaging by means of Fourier spectrum acquisition,” Nat. Commun. **6**(1), 6225 (2015). [CrossRef] [PubMed]

**19. **L. Wang and S. Zhao, “Fast reconstructed and high-quality ghost imaging with fast Walsh–Hadamard transform,” Photon. Res. **4**(6), 240–244 (2016). [CrossRef]

**20. **S. Jiao, “Design of optimal illumination patterns in single-pixel imaging using image dictionaries,” arXiv preprint arXiv:1806.01340 (2018).

**21. **W. K. Yu, M. F. Li, X. R. Yao, X. F. Liu, L. A. Wu, and G. J. Zhai, “Adaptive compressive ghost imaging based on wavelet trees and sparse representation,” Opt. Express **22**(6), 7133–7144 (2014). [CrossRef] [PubMed]

**22. **M.-J. Sun, M. P. Edgar, D. B. Phillips, G. M. Gibson, and M. J. Padgett, “Improving the signal-to-noise ratio of single-pixel imaging using digital microscanning,” Opt. Express **24**(10), 10476–10485 (2016). [CrossRef] [PubMed]

**23. **M.-J. Sun, W. Chen, T.-F. Liu, and L.-J. Li, “Image retrieval in spatial and temporal domains with a quadrant detector,” IEEE Photonics J. **9**(5), 1 (2017). [CrossRef]

**24. **L. Bian, J. Suo, Q. Dai, and F. Chen, “Experimental comparison of single-pixel imaging algorithms,” J. Opt. Soc. Am. A **35**(1), 78–87 (2018). [CrossRef] [PubMed]

**25. **D. B. Phillips, M. J. Sun, J. M. Taylor, M. P. Edgar, S. M. Barnett, G. M. Gibson, and M. J. Padgett, “Adaptive foveated single-pixel imaging with dynamic supersampling,” Sci. Adv. **3**(4), e1601782 (2017). [CrossRef] [PubMed]

**26. **V. Kravets and A. Stern, “Video compressive sensing using Russian dolls ordering of Hadamard basis for multi-scale sampling of a scene in motion using a single pixel camera,” In Computational Imaging III. International Society for Optics and Photonics **10669**, 106690 (2018).

**27. **R. C. Gonzalez and R. E. Woods, *Digital Image Processing* (Addison-Wesley, 1992).

**28. **D. S. Biggs and M. Andrews, “Acceleration of iterative image restoration algorithms,” Appl. Opt. **36**(8), 1766–1775 (1997). [CrossRef] [PubMed]

**29. **K. Guo, S. Jiang, and G. Zheng, “Multilayer fluorescence imaging on a single-pixel detector,” Biomed. Opt. Express **7**(7), 2425–2431 (2016). [CrossRef] [PubMed]

**30. **S. Jiao and P. W. M. Tsang, “Enhanced autofocusing scheme in digital holography based on hologram decomposition,” IEEE International Conference on Industrial Informatics (INDIN)**14,**541–545 (2016).

**31. **S. Jiao, P. W. M. Tsang, T. C. Poon, J. P. Liu, W. Zou, and X. Li, “Enhanced autofocusing in optical scanning holography based on hologram decomposition,” IEEE Trans. Industr. Inform. **13**(5), 2455–2463 (2017). [CrossRef]

**32. **N. Sannomiya, H. Iima, and N. Akatsuka, “Genetic algorithm approach to a production ordering problem in an assembly process with constant use of parts,” Int. J. Syst. Sci. **25**(9), 1461–1472 (1994). [CrossRef]

**33. **S. Kirkpatrick, C. D. Gelatt Jr., and M. P. Vecchi, “Optimization by simulated annealing,” Science **220**(4598), 671–680 (1983). [CrossRef] [PubMed]

**34. **A. S. M. Jiao, P. W. M. Tsang, and T. C. Poon, “Restoration of digital off-axis Fresnel hologram by exemplar and search based image inpainting with enhanced computing speed,” Comput. Phys. Commun. **193**, 30–37 (2015). [CrossRef]

**35. **Z. H. Xu, W. Chen, J. Penuelas, M. Padgett, and M. J. Sun, “1000 fps computational ghost imaging using LED-based structured illumination,” Opt. Express **26**(3), 2427–2434 (2018). [CrossRef] [PubMed]