Abstract
The information dimension obtained by multispectral ghost imaging is richer than that of single-band ghost imaging. However, existing multispectral ghost imaging systems still suffer from shortcomings such as complex structure or time-consuming reconstruction. Here, an approach of cosinusoidal encoding multiplexed structured illumination multispectral ghost imaging is proposed. It can capture the multispectral image of the target object within one projection cycle with a single-pixel detector while maintaining high imaging efficiency and low time consumption. The core of the proposed approach is a novel encoding strategy that allows the multispectral image to be decoded and reconstructed via the Fourier transform. Specifically, cosinusoidal encoding matrices with specific frequency characteristics are fused with the orthogonal Hadamard basis patterns to form the multiplexed structured illumination patterns. A broadband photomultiplier is employed to collect the backscattered signals produced as the structured illumination interacts with the target object. The conventional linear algorithm is first applied to recover the mixed grayscale image of the imaging scene. Given the specific frequency distribution of the constructed cosinusoidal encoding matrices, the mixed grayscale image can be converted to the frequency domain for further decoding. The images of the multiple spectral components can then be obtained with a few manipulations based on the Fourier transform. A series of numerical simulations and experiments verified the proposed approach. The present cosinusoidal encoding multiplexed structured illumination can also be introduced in many other fields of high-dimensional information acquisition, such as high-resolution imaging and polarization ghost imaging.
© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement
1. Introduction
Ghost imaging [1–30] is a novel imaging scheme that has been validated in both quantum [2] and classical systems [3,4]. In a ghost imaging system, the light source is split into two beams. One beam irradiates the target object and produces transmitted or scattered signals, which are then received by a single-pixel detector. The other beam does not interact with the target object, and its spatial distribution is recorded by an array detector. The image of the target object can be retrieved by correlating the measurements of the two detectors. Later, Shapiro [5] proposed the computational ghost imaging scheme, which was experimentally demonstrated by Bromberg et al. [6]. Computational ghost imaging significantly reduces the redundancy of the imaging system and has promoted the development of ghost imaging. To date, ghost imaging has shown unique advantages in terahertz [7–14], infrared [15–17], X-ray [18,19], and many other fields.
Multispectral or hyperspectral ghost imaging [20–30] combines spectral imaging [31–35] with ghost imaging technology. Compared with single-band ghost imaging, multispectral ghost imaging provides richer information, which may improve accuracy and reliability in object recognition, material analysis, and medical diagnosis. The straightforward strategy for multispectral ghost imaging employs multiple detectors [20] or time-divided detection via a filter wheel to capture images of the various spectral channels, which are then fused into a color image. These strategies face the dilemma of complex structures and large amounts of data. Methods of multiplexed structured illumination were proposed for multispectral ghost imaging with a broadband single-pixel detector in our previous work [21,22]: mutually orthogonal binary encoding matrices (corresponding to the red, green, and blue colored information, respectively) are multiplexed with random patterns or orthogonal Hadamard basis patterns to form the encoding structured illumination patterns, and compressed sensing algorithms are employed in the reconstruction procedure to recover the multispectral images. Liu et al. [23] demonstrated a color computational ghost imaging method that can reconstruct high-fidelity images at a relatively low sampling rate (0.0625) by using a plug-and-play generalized alternating projection algorithm (PnP-GAP). Yu et al. [24] presented a color computational ghost imaging strategy trained on a simulated dataset, which eliminates the workload of acquiring experimental training datasets and reduces the sampling times in imaging experiments. These techniques work well under sub-Nyquist measurements, reducing the number of online samples but increasing the cost of the subsequent reconstruction. Olivieri et al. [25] proposed a novel approach to deep-subwavelength single-pixel imaging based on nonlinear pattern generation and time-resolved field measurements with the Walsh-Hadamard encoding scheme. They demonstrated the feasibility of single-pixel hyperspectral imaging, which paves the way for a new methodology in two-dimensional material characterization. Orthogonal Fourier basis patterns were applied in an actively compressive imaging scheme that encodes, condenses, and recovers the spatial, spectral, and 3D information of the object simultaneously through information multiplexing [26]. A 4-step or 3-step phase-shifting strategy is commonly employed in Fourier-basis ghost imaging to acquire each complex-valued Fourier coefficient of the imaged object [27], which means such computational ghost imaging requires more measurements.
Few measurements and short reconstruction times are persistent goals of multispectral or color ghost imaging. Here, we demonstrate an approach of cosinusoidal encoding multiplexed structured illumination multispectral ghost imaging that makes multispectral ghost imaging more efficient. The orthogonal Hadamard basis patterns and cosinusoidal encoding matrices are multiplexed to generate colored illumination patterns. A single-pixel detector is adopted to collect the reflected signal of the object. During recovery, the mixed grayscale image of the imaged object is first recovered by the conventional linear algorithm and then converted to Fourier space to recombine the information of each channel. The spectral information of the different channels is then acquired separately and fused to generate a multispectral image. Numerical simulations and experiments show that our approach can acquire multispectral information efficiently and synchronously, and rapidly reconstruct target images. The organization of the paper is as follows. In Section 2, the imaging and reconstruction methods are introduced. Numerical simulations and experiments are presented in Section 3 to verify the proposed approach. Finally, Section 4 concludes the paper.
2. Imaging and reconstruction methods
Cosinusoidal encoding multiplexed structured illumination ghost imaging can image colored objects by encoding the different spectral channels independently. Here, we take imaging in three spectral channels as an example. Three N × N cosinusoidal encoding matrices are produced, corresponding to the red, green, and blue spectral channels and denoted by Fred, Fgreen, and Fblue, respectively. Each matrix can be calculated through Eq. (1):
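One plausible form of such an encoding matrix, sketched here with assumed channel-specific carrier frequencies $({f_{x,c}},{f_{y,c}})$ rather than the exact parameters of Eq. (1), is

$$F_c(x,y) = \frac{1}{2}\left\{1 + \cos\!\left[2\pi\left({f_{x,c}}\,x + {f_{y,c}}\,y\right)\right]\right\}, \quad c \in \{\textrm{red}, \textrm{green}, \textrm{blue}\},$$

where the carrier frequencies are chosen so that the three spectral channels occupy well-separated regions of the Fourier plane.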
In summary, multispectral ghost imaging with cosinusoidal encoding multiplexed structured illumination is illustrated in Fig. 1. The imaging process is divided into three steps. The first is the generation of the modulated illumination patterns: the Hadamard basis patterns and cosinusoidal encoding matrices are multiplexed to generate colored illumination patterns. The second step is similar to traditional ghost imaging: the constructed illumination patterns illuminate the target object, and the conventional linear algorithm is then used to obtain the mixed grayscale image. The third step is the reconstruction of the multispectral image. The mixed grayscale image is transformed to the frequency domain by a two-dimensional Fourier transform. Owing to the specific frequency settings of the constructed encoding matrices, the spectrum information of the three channels is separated in Fourier space and can be extracted individually by a low-pass filter. The two-dimensional inverse Fourier transform of the three extracted spectra then yields the spectral components. Finally, the spectral images of each channel are fused to reconstruct the multispectral image.
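To make the three steps concrete, the pipeline can be sketched in a few lines of NumPy. Everything here is illustrative: the carrier frequencies, toy scene, and filter settings are assumptions rather than the paper's parameters, and the Hadamard measurement stage is idealized (with the complete orthogonal basis, the linear reconstruction returns the carrier-modulated mixture exactly, so the sketch forms that mixture directly).

```python
import numpy as np

N = 64
yy, xx = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")

# Step 1: channel-specific cosinusoidal carriers, well separated in the
# Fourier plane so their sidebands do not overlap (illustrative choice).
freqs = {"red": (0, 16), "green": (16, 0), "blue": (16, 16)}  # (ky, kx), cycles/frame
carriers = {c: 0.5 * (1 + np.cos(2 * np.pi * (ky * yy + kx * xx) / N))
            for c, (ky, kx) in freqs.items()}

# Hypothetical scene: one binary spectral image per channel.
regions = {"red": (8, 24, 8, 24), "green": (30, 50, 30, 50), "blue": (10, 20, 40, 60)}
scene = {}
for c, (r0, r1, c0, c1) in regions.items():
    img = np.zeros((N, N))
    img[r0:r1, c0:c1] = 1.0
    scene[c] = img

# Step 2: the mixed grayscale image that a full Hadamard measurement cycle
# would recover with the conventional linear algorithm.
mixed = sum(carriers[c] * scene[c] for c in carriers)

# Step 3: demodulate each channel in the Fourier domain.
def butterworth(shape, radius, order):
    """Centered Butterworth low-pass filter."""
    h, w = shape
    v, u = np.meshgrid(np.arange(h) - h // 2, np.arange(w) - w // 2, indexing="ij")
    return 1.0 / (1.0 + (np.hypot(v, u) / radius) ** (2 * order))

lp = butterworth((N, N), radius=8, order=4)
spectrum = np.fft.fft2(mixed)
recovered = {}
for c, (ky, kx) in freqs.items():
    # Roll the spectrum so this channel's sideband sits at DC, low-pass it,
    # and return to the spatial domain. Each cosine sideband carries 1/4 of
    # the channel amplitude, hence the factor of 4.
    shifted = np.roll(spectrum, (-ky, -kx), axis=(0, 1))
    filtered = np.fft.ifftshift(lp * np.fft.fftshift(shifted))
    recovered[c] = 4.0 * np.real(np.fft.ifft2(filtered))

# Fuse the three spectral components into one multispectral (RGB) image.
multispectral = np.stack([recovered[c] for c in ("red", "green", "blue")], axis=-1)
```

The carrier separation of 16 cycles/frame against a filter radius of 8 is what keeps the channels from cross-talking; shrinking the separation or enlarging the filter reproduces the crosstalk behavior discussed in Section 3.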
3. Simulations and experiments
3.1 Numerical simulation
Numerical simulations are carried out to evaluate the proposed approach. The object is the “onion” image cropped to a 256 × 256 resolution. For the proposed method, the way the spectrum information of each channel is extracted influences the quality of image restoration. Here, a Butterworth low-pass filter is applied to extract the spectrum information of each channel. The Butterworth low-pass filter [36] model is as follows:

$$H({f_x},{f_y}) = \frac{1}{1 + {\left[R({f_x},{f_y})/{R_0}\right]}^{2n}},$$
where $R({f_x},{f_y})$ is the spatial frequency, and ${R_0}$ and $n$ denote the radius and the order of the filter, respectively. The image restoration quality is strongly influenced by the order and radius of the filter. The peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM) [37] are employed to evaluate the reconstructed image quality; higher PSNR and SSIM scores indicate better quality of the restored image. These metrics are calculated as

$$\textrm{PSNR} = 10\log_{10}\frac{\textrm{MAX}_I^2}{\textrm{MSE}}, \qquad \textrm{SSIM}(x,y) = \frac{(2{\mu_x}{\mu_y} + {c_1})(2{\sigma_{xy}} + {c_2})}{({\mu_x^2} + {\mu_y^2} + {c_1})({\sigma_x^2} + {\sigma_y^2} + {c_2})},$$

where MSE is the mean squared error between the restored image and the ground truth, $\textrm{MAX}_I$ is the maximum possible pixel value, ${\mu}$, ${\sigma^2}$, and ${\sigma_{xy}}$ denote means, variances, and covariance, and ${c_1}$ and ${c_2}$ are small constants that stabilize the division.

Figure 2 shows the PSNR and SSIM scores of the reconstructed image under different filter sizes when 2nd-, 3rd-, and 4th-order Butterworth low-pass filters are selected. The trends of PSNR and SSIM are similar. As the filter radius gradually increases, the low-frequency component obtained for each spectral channel also increases, and the quality of the restored image slowly peaks. As the filter size increases further, crosstalk arises between the spectral information of the channels, and the quality of the reconstructed image drops rapidly. Increasing the order of the Butterworth filter delays the appearance of this inflection point: the inflection points of the SSIM for the 2nd-, 3rd-, and 4th-order filters appear at radii of 40, 55, and 65 pixels, respectively. Therefore, to obtain the expected imaging effect, a proper combination of filter size and order should be found through parameter optimization. The best image reconstruction quality is achieved by a 3rd- or 4th-order filter with a radius of 40-70 pixels, yielding a PSNR of the restored image above 30 dB and an SSIM above 0.90.
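For reference, both metrics can be computed in a few lines. Note that the SSIM sketched here uses global image statistics over a single window, a simplification of the locally windowed metric of Ref. [37]; it is intended only as an illustration.

```python
import numpy as np

def psnr(ref, img, data_range=1.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((ref.astype(float) - img.astype(float)) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def ssim_global(ref, img, data_range=1.0):
    """Global-statistics SSIM (single window over the whole image).
    The reference metric averages over local sliding windows; this
    simplified variant is adequate only as a quick sanity check."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = ref.mean(), img.mean()
    vx, vy = ref.var(), img.var()
    cov = ((ref - mx) * (img - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

A uniform offset of 0.1 on a unit-range image gives an MSE of 0.01 and hence a PSNR of exactly 20 dB, a convenient spot check.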
The reconstruction results using a 4th-order low-pass Butterworth filter with a radius of 65 pixels are shown in Fig. 3. The PSNR of the reconstructed image reaches 32.2 dB, and the SSIM surpasses 0.960. The PSNR and SSIM of the multispectral image are the averages of the PSNR and SSIM of the three color-channel images. Comparing the brightness of the three spectral components, the red pepper in the middle is bright only in the red channel, the yellow pepper in the lower right corner presents different brightness in the red and green channels, and the white onion in the upper right corner is bright in all three channels. On the whole, the spectral information of the target image is reconstructed well.
We further use the evolutionary compressive sensing technique [38] to study the multispectral imaging quality under different numbers of measurements. Evolutionary compressive sensing arranges the detected signal intensity values in descending order and then selects the corresponding subset of Hadamard patterns to reconstruct the image. Figure 4(a-f) shows the restoration results using 6.25%, 12.5%, 25%, 50%, 75%, and 100% of the complete set of Hadamard patterns. Even when only the most significant 6.25% of the Hadamard patterns were used [Fig. 4(a)], the spectral information of the target image was extracted well. As the fraction of the pattern set increases, so does the quality of the restored multispectral image. The simulation results show that the proposed strategy can reconstruct the multispectral image well. The numerical simulation takes 5.45 seconds to restore a 256 × 256 resolution multispectral image from 65536 measurements. As a comparison, we use the random orthogonal encoding multiplexed strategy of Ref. [22] to simulate the same grayscale imaged object, as shown in Fig. 4(g-l). In Ref. [22], mutually orthogonal binary encoding matrices and orthogonal Hadamard basis patterns are multiplexed to form the illumination patterns, and compressed sensing algorithms [39] are employed in the reconstruction procedure to recover the multispectral images. That spectral encoded computational ghost imaging method takes 74.02 minutes to restore a 256 × 256 resolution multispectral image from 65536 measurements. Overall, in terms of both recovery quality and time consumption, cosinusoidal encoding multiplexed structured illumination multispectral ghost imaging has advantages over the method of Ref. [22]. The above simulations were performed in MATLAB R2016a on a Windows 10 laptop equipped with an AMD Ryzen 7 5800H CPU (3.20 GHz) and 16 GB of RAM.
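The subset-selection idea can be sketched as follows. The scene is hypothetical, and the ±1 Hadamard rows stand in for the projected patterns (in practice they are offset or split into non-negative illumination patterns); this particular toy square happens to be sparse in the Hadamard basis, so even the 6.25% subset recovers it.

```python
import numpy as np

def hadamard_matrix(n):
    """Sylvester construction of a Hadamard matrix; n must be a power of two."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

N = 32                         # image is N x N pixels
H = hadamard_matrix(N * N)     # each row, reshaped to N x N, is one pattern

# Hypothetical grayscale scene.
scene = np.zeros((N, N))
scene[8:24, 8:24] = 1.0
signals = H @ scene.ravel()    # ideal noise-free single-pixel measurements

def reconstruct(fraction):
    """Keep only the patterns whose measured intensities are largest,
    then apply the conventional linear (correlation) reconstruction."""
    m = int(fraction * N * N)
    idx = np.argsort(np.abs(signals))[::-1][:m]
    # Hadamard rows are orthogonal with H H^T = (N*N) I, so the full-set
    # reconstruction is exact; a subset keeps the dominant coefficients.
    return (H[idx].T @ signals[idx]).reshape(N, N) / (N * N)

full = reconstruct(1.0)
partial = reconstruct(0.0625)  # most significant 6.25% of the patterns
```

Sorting by measured intensity concentrates the budget on the coefficients that carry the most image energy, which is why low sampling fractions still extract the scene well.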
3.2 Experimental verification
We experimentally study the imaging performance of the proposed approach in the laboratory. As shown in Fig. 5(a), the experimental setup consists of a digital light projector, the imaged object, a compound lens, a single-pixel detector, and a computer system with a built-in digitizer. The same 256 × 256 resolution cosinusoidal encoding multiplexed structured illumination patterns as in the numerical simulation are employed in the experiment. The digital light projector (TOSHIBA TDP-98) projects one pattern onto the object every 0.2 seconds to obtain a high signal-to-noise ratio. The back-reflected light is collected by a compound lens composed of three lenses with the same diameter (50.8 mm) and the same focal length (100 mm). The equivalent focal length of the compound lens is 36.6 mm, and its length is 30 mm. The distance between the object and the first lens of the compound lens is about 1000 mm, and the distance between the last lens and the detector is about 20 mm, which ensures that the light collected by the compound lens is imaged onto the sensitive surface of the detector. Photoelectric conversion is performed by the single-pixel detector (Thorlabs PMM02), and the signal is then discretized by a digitizer (ADLINK PCI-9816H). In our experiment, the sampling rate is set to 40 kHz. The restoration process is the same as described in the methods section. The experimental imaging objects, captured by a camera, are shown in Fig. 5(b) and Fig. 5(c).
The experimental results for the letter object “USTC” are shown in Fig. 6. The imaging resolution is 256 × 256. We first use the evolutionary compressive sensing technique to restore the mixed grayscale image of the imaged object. The Fourier transform is applied to transform the mixed grayscale image to the frequency domain, and the spectrum information of each of the three channels is moved to the center of the Fourier domain. A 3rd-order Butterworth filter with a radius of 45 pixels is then employed to extract the recombined spectra of the three spectral channels, and the inverse Fourier transform recovers the three spectral channels from the corresponding recombined spectra. The final multispectral image is fused from the three spectral components. Figure 6(a-f) shows the reconstruction results obtained with the most significant 6.25%, 12.5%, 25%, 50%, 75%, and 100% of the complete Hadamard pattern set, respectively. We take the restored image at 100% measurements as the ground truth of the imaged object to calculate the image restoration indexes (PSNR and SSIM) at the other measurement fractions. The results show that even when only the most significant 6.25% of the complete Hadamard pattern set is used, the color information of the target object can be extracted well; the PSNR and SSIM are 31.33 dB and 0.8738, respectively. As the fraction of the pattern set increases, the quality of the fused multispectral image and of the reconstructed spectral component images also increases.
The experimental results for the toy car are shown in Fig. 7. The imaging resolution is 256 × 256, and the reconstruction process is the same as in the above experiment. As shown in Fig. 7, the current strategy captures the complex color object well. The minimum PSNR and SSIM are about 30.6 dB and 0.84 at the most significant 6.25% of the complete measurements, and the color and reflectivity information of the imaged object is well recovered. As the number of measurements increases, the quality of the reconstructed spectral images also increases, although the improvement in color information is not remarkable. We also found that, owing to hardware limitations and environmental noise, the experimentally reconstructed images contain mosaic stripes that are absent from the simulation results. Overall, the two experimental results imply that the proposed strategy is valid for color computational ghost imaging.
4. Discussions and conclusions
This paper demonstrated an approach of cosinusoidal encoding multiplexed structured illumination multispectral ghost imaging. It can capture the multispectral image of the target object within one projection cycle while maintaining high imaging efficiency and low time consumption. Here, the deterministic orthogonal Hadamard basis patterns and cosinusoidal encoding matrices are multiplexed to generate colored illumination patterns. A broadband single-pixel detector is employed to collect the backscattered signals produced as the structured illumination interacts with the target object. The conventional linear algorithm is first applied to recover the mixed grayscale image of the imaging scene. Then, owing to the specific frequency composition of the constructed encoding matrices, the mixed grayscale image can be converted to the frequency domain for further decoding. The images of the multiple spectral components can thus be obtained with a few manipulations based on the Fourier transform, and the final multispectral image can be reconstructed by fusing the spectral images of each channel. A series of numerical simulations and experiments verified the proposed approach.
The present work improves the acquisition efficiency of multispectral imaging. The strategy can flexibly construct the encoding matrices according to the number of spectral channels and achieve high-quality restoration of multispectral images. More importantly, the restoration process of the multispectral images is more efficient: digital signal processing methods, including the Fourier transform, are applied to demodulate and reconstruct the images of the spectral components, which accelerates the reconstruction of multispectral images. It takes approximately 5.45 seconds to restore a 256 × 256 resolution multispectral image from 65536 measurements, whereas about 74.02 minutes were needed to obtain the 256 × 256 resolution multispectral image of the same imaged object by applying the compressed sensing (CS) algorithm [39] in our previous work [22]. In addition, to our knowledge, CS is an iterative optimization approach whose running time grows further as the imaging resolution increases; worse, CS solvers (such as TVAL3 [40]) may not work properly when the spatial resolution is too large. In contrast, the proposed approach may retain an advantage as the imaging resolution increases, owing to the higher energy concentration in the low-frequency region. The present cosinusoidal encoding multiplexed structured illumination can also be introduced in many other fields of high-dimensional information acquisition, such as high-resolution imaging and polarization ghost imaging. Of course, the experimental results still fall short of the simulation results: the restorations show a mosaic structure in the color information, which might be caused by environmental noise, object reflectivity, and hardware performance. Future improvements and research should consider these factors to achieve higher quality imaging.
Funding
Open Project of Advanced Laser Technology Laboratory of Anhui Province (NO. AHL2021ZR01); Foundation of Key Laboratory of Science and Technology Innovation of Chinese Academy of Sciences (NO. CXJJ-20S028); National Natural Science Foundation of China (NO. U20A20214); Youth Innovation Promotion Association of the Chinese Academy of Sciences (NO. 2020438).
Disclosures
The authors declare no conflicts of interest.
Data availability
Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.
References
1. D. Strekalov, A. Sergienko, D. Klyshko, and Y. Shih, “Observation of two-photon “ghost” interference and diffraction,” Phys. Rev. Lett. 74(18), 3600–3603 (1995). [CrossRef]
2. T. B. Pittman, Y. Shih, D. Strekalov, and A. V. Sergienko, “Optical imaging by means of two-photon quantum entanglement,” Phys. Rev. A 52(5), R3429–R3432 (1995). [CrossRef]
3. R. S. Bennink, S. J. Bentley, and R. W. Boyd, ““Two-photon” coincidence imaging with a classical source,” Phys. Rev. Lett. 89(11), 113601 (2002). [CrossRef]
4. A. Valencia, G. Scarcelli, M. D’Angelo, and Y. Shih, “Two-photon imaging with thermal light,” Phys. Rev. Lett. 94(6), 063601 (2005). [CrossRef]
5. J. H. Shapiro, “Computational ghost imaging,” Phys. Rev. A 78(6), 061802 (2008). [CrossRef]
6. Y. Bromberg, O. Katz, and Y. Silberberg, “Ghost imaging with a single detector,” Phys. Rev. A 79(5), 053840 (2009). [CrossRef]
7. R. I. Stantchev, B. Sun, S. M. Hornett, P. A. Hobson, G. M. Gibson, M. J. Padgett, and E. Hendry, “Noninvasive, near-field terahertz imaging of hidden objects using a single-pixel detector,” Sci. Adv. 2(6), e1600190 (2016). [CrossRef]
8. T. Vasile, V. Damian, D. Coltuc, and M. Petrovici, “Single pixel sensing for THz laser beam profiler based on Hadamard Transform,” Opt. Laser Technol. 79, 173–178 (2016). [CrossRef]
9. L. Olivieri, J. S. T. Gongora, L. Peters, V. Cecconi, A. Cutrona, J. Tunesi, R. Tucker, A. Pasquazi, and M. Peccianti, “Hyperspectral terahertz microscopy via nonlinear ghost imaging,” Optica 7(2), 186–191 (2020). [CrossRef]
10. L. Olivieri, J. S. Totero Gongora, A. Pasquazi, and M. Peccianti, “Time-resolved nonlinear ghost imaging,” ACS Photonics 5(8), 3379–3388 (2018). [CrossRef]
11. J. S. Totero Gongora, L. Olivieri, L. Peters, J. Tunesi, V. Cecconi, A. Cutrona, R. Tucker, V. Kumar, A. Pasquazi, and M. Peccianti, “Route to intelligent imaging reconstruction via terahertz nonlinear ghost imaging,” Micromachines 11(5), 521 (2020). [CrossRef]
12. S.-C. Chen, Z. Feng, J. Li, W. Tan, L.-H. Du, J. Cai, Y. Ma, K. He, H. Ding, Z.-H. Zhai, Z.-R. Li, C.-W. Qiu, X.-C. Zhang, and L.-G. Zhu, “Ghost spintronic THz-emitter-array microscope,” Light: Sci. Appl. 9(1), 99 (2020). [CrossRef]
13. L. Leibov, A. Ismagilov, V. Zalipaev, B. Nasedkin, Y. Grachev, N. Petrov, and A. Tcypkin, “Speckle patterns formed by broadband terahertz radiation and their applications for ghost imaging,” Sci. Rep. 11(1), 20071 (2021). [CrossRef]
14. V. Cecconi, V. Kumar, A. Pasquazi, J. S. Totero Gongora, and M. Peccianti, “Nonlinear field-control of terahertz waves in random media for spatiotemporal focusing,” Open Res Europe 2, 32 (2022). [CrossRef]
15. N. Radwell, K. J. Mitchell, G. M. Gibson, M. P. Edgar, R. Bowman, and M. J. Padgett, “Single-pixel infrared and visible microscope,” Optica 1(5), 285–289 (2014). [CrossRef]
16. M. Edgar, G. M. Gibson, R. W. Bowman, B. Sun, N. Radwell, K. J. Mitchell, S. S. Welsh, and M. J. Padgett, “Simultaneous real-time visible and infrared video with single-pixel detectors,” Sci. Rep. 5(1), 10669 (2015). [CrossRef]
17. H. Zhao, P. Li, Y. Ma, S. Jiang, and B. Sun, “3D single-pixel imaging at the near-infrared wave band,” Appl. Opt. 61(13), 3845–3849 (2022). [CrossRef]
18. H. Yu, R. Lu, S. Han, H. Xie, G. Du, T. Xiao, and D. Zhu, “Fourier-transform ghost imaging with hard X rays,” Phys. Rev. Lett. 117(11), 113901 (2016). [CrossRef]
19. A.-X. Zhang, Y.-H. He, L.-A. Wu, L.-M. Chen, and B.-B. Wang, “Tabletop x-ray ghost imaging with ultra-low radiation,” Optica 5(4), 374–377 (2018). [CrossRef]
20. S. S. Welsh, M. P. Edgar, R. Bowman, P. Jonathan, B. Sun, and M. J. Padgett, “Fast full-color computational imaging with single-pixel detectors,” Opt. Express 21(20), 23068–23074 (2013). [CrossRef]
21. J. Huang and D. Shi, “Multispectral computational ghost imaging with multiplexed illumination,” J. Opt. 19(7), 075701 (2017). [CrossRef]
22. J. Huang, D. Shi, W. Meng, L. Zha, K. Yuan, S. Hu, and Y. Wang, “Spectral encoded computational ghost imaging,” Opt. Commun. 474, 126105 (2020). [CrossRef]
23. S. Liu, Q. Li, H. Wu, and X. Meng, “Color computational ghost imaging based on a plug-and-play generalized alternating projection,” Opt. Express 30(11), 18364–18373 (2022). [CrossRef]
24. Z. Yu, Y. Liu, J. Li, X. Bai, Z. Yang, Y. Ni, and X. Zhou, “Color computational ghost imaging by deep learning based on simulation data training,” Appl. Opt. 61(4), 1022–1029 (2022). [CrossRef]
25. L. Olivieri, J. S. T. Gongora, V. Cecconi, R. Tucker, A. Pasquazi, and M. Peccianti, “Hyperspectral Single-Pixel Reconstruction at THz Frequencies using Time-Resolved Nonlinear Ghost Imaging,” in 2019 Conference on Lasers and Electro-Optics Europe & European Quantum Electronics Conference (CLEO/Europe-EQEC, 2019), 1-1.
26. Z. Zhang, S. Liu, J. Peng, M. Yao, G. Zheng, and J. Zhong, “Simultaneous spatial, spectral, and 3D compressive imaging via efficient Fourier single-pixel measurements,” Optica 5(3), 315–319 (2018). [CrossRef]
27. Z. Zhang, X. Wang, G. Zheng, and J. Zhong, “Hadamard single-pixel imaging versus Fourier single-pixel imaging,” Opt. Express 25(16), 19619–19639 (2017). [CrossRef]
28. B.-L. Liu, Z.-H. Yang, X. Liu, and L.-A. Wu, “Coloured computational imaging with single-pixel detectors based on a 2D discrete cosine transform,” J. Mod. Opt. 64(3), 259–264 (2017). [CrossRef]
29. P. Wang and R. Menon, “Computational multispectral video imaging,” J. Opt. Soc. Am. A 35(1), 189–199 (2018). [CrossRef]
30. L. Wang and S. Zhao, “Full color single pixel imaging by using multiple input single output technology,” Opt. Express 29(15), 24486–24499 (2021). [CrossRef]
31. Y. Garini, I. T. Young, and G. McNamara, “Spectral imaging: principles and applications,” Cytometry, Part A 69A(8), 735–747 (2006). [CrossRef]
32. L. Bian, J. Suo, G. Situ, Z. Li, J. Fan, F. Chen, and Q. Dai, “Multispectral imaging using a single bucket detector,” Sci. Rep. 6(1), 1–7 (2016). [CrossRef]
33. C. Hu, S. Yang, M. Chen, and H. Chen, “Quadrature multiplexed structured illumination imaging,” IEEE Photonics J. 12(2), 1–8 (2020). [CrossRef]
34. K. Dorozynska and E. Kristensson, “Implementation of a multiplexed structured illumination method to achieve snapshot multispectral imaging,” Opt. Express 25(15), 17211–17226 (2017). [CrossRef]
35. K. Dorozynska, V. Kornienko, M. Aldén, and E. Kristensson, “A versatile, low-cost, snapshot multidimensional imaging approach based on structured light,” Opt. Express 28(7), 9572–9586 (2020). [CrossRef]
36. S. Butterworth, “On the theory of filter amplifiers,” Wireless Engineer 7, 536–541 (1930).
37. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. on Image Process. 13(4), 600–612 (2004). [CrossRef]
38. M. J. Sun, L. T. Meng, M. P. Edgar, M. J. Padgett, and N. Radwell, “A Russian Dolls ordering of the Hadamard basis for compressive single-pixel imaging,” Sci. Rep. 7(1), 1–7 (2017). [CrossRef]
39. J. Zhang, D. Zhao, and W. Gao, “Group-based sparse representation for image restoration,” IEEE Trans. on Image Process. 23(8), 3336–3351 (2014). [CrossRef]
40. C. Li, W. Yin, H. Jiang, and Y. Zhang, “An efficient augmented Lagrangian method with applications to total variation minimization,” Comput. Optim. Appl. 56(3), 507–530 (2013). [CrossRef]