Optica Publishing Group

Deep speckle reassignment: towards bootstrapped imaging in complex scattering states with limited speckle grains

Open Access

Abstract

Optical imaging through scattering media is a practical challenge with crucial applications in many fields. Many computational imaging methods have been designed for object reconstruction through opaque scattering layers, and remarkable recovery results have been demonstrated with both physical and learning models. However, most of these approaches depend on relatively ideal conditions, with a sufficient number of speckle grains and an adequate data volume. Here, the in-depth information carried by limited speckle grains is unearthed through speckle reassignment, and a bootstrapped imaging method is proposed for reconstruction in complex scattering states. Benefiting from a bootstrap priors-informed data augmentation strategy with a limited training dataset, the validity of the physics-aware learning method is demonstrated and high-fidelity reconstruction results through unknown diffusers are obtained. This bootstrapped imaging method with limited speckle grains broadens the way to highly scalable imaging in complex scattering scenes and provides a heuristic reference for practical imaging problems.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Diffraction-limited optical setups are crucial in many imaging fields of scientific research [1]. However, object information is seriously degraded after modulation by unknown scattering media (e.g., in medical observations through biological samples and remote sensing through the turbulent atmosphere), which diffuse optical beams into speckles and interfere with message transmission [2–5]. Several imaging approaches are employed to extract the usable information through opaque scattering layers via advanced techniques, such as time-gated methods [6,7], wave-front-shaping methods [8,9], transmission matrix (TM) methods [10,11], single-pixel imaging methods [12,13], and point spread function (PSF) methods [14,15]. Much progress has been made by the methods listed here with 'ballistic' light or invasive priors. Speckle-correlation methods based on the memory effect (ME) [16,17] and fluorescence imaging methods based on matrix factorization [18,19] have been reported for non-invasive imaging through scattering media. ME-based correlation methods have a limited field of view (FOV), and the non-invasive fluorescence imaging methods require the scattering media to be stable. Meanwhile, the ME-based correlation methods have limited performance owing to the restricted autocorrelation peak signal-to-noise ratio (PSNR) and residual statistical speckle noise [17], and the recovery of complex gray objects is also limited by the capability of phase retrieval algorithms. In practical scenes, the quality of the captured speckles is influenced by many factors, e.g., the exposure time, the imaging sensor size, and variable scattering states. Wang et al. [20] proposed a method to improve the autocorrelation quality by capturing multiple frames of speckle patterns and piecing them together. This speckle-stitching method is an efficient approach to enhance the recovery quality by trading the sampling ratio for spatial resolution.

With the advent of the powerful data-mining capability of neural networks and advanced optoelectronic techniques, learning approaches have shown great potential in optical imaging [21,22]. Deep learning (DL) methods have achieved great success in image restoration through scattering media under different conditions [23–26], e.g., different scattering media types [27,28], different distances between the object and the diffuser [29,30], and different illumination light sources [31,32]. In particular, DL methods have been proposed to learn the forward propagation path of light through the system and to replace the fixed TM [33]. Most DL methods for imaging through diffusers use speckles directly, which results in limited generalization capability in unknown scattering states. Li et al. first proposed a DL method for scalable imaging through unknown diffusers and further developed an interpretable neural network to extend the depth of field, which requires the unknown diffusers to have similar statistical properties and the recovered objects to be simple and sparse [34,35]. Therefore, the traditional speckle-correlation methods and DL methods are plagued by limited reconstruction quality and limited generalization capability, respectively.

More physics priors have been introduced into learning frameworks, and extraordinary reconstruction results have been obtained [36,37]. However, the high-fidelity reconstructions rely on enough training diffusers or larger-region speckles for object recovery [28,32]. A better autocorrelation PSNR via the traditional speckle-correlation method also needs enough frames of speckle patterns [20]. All the unstable fluctuation factors affect the quality and stability of the collected speckles, which brings more challenges to both traditional speckle-correlation methods and DL methods. In practical scattering scenes (e.g., over-exposure and under-exposure states), it is difficult for the optical imaging system to record comprehensive speckles, and only finite scattering information is available. Therefore, making full use of the limited speckle grains and making sense of the speckle characteristics are crucial for practical scattering applications.

In this paper, a bootstrapped imaging method is proposed to reconstruct hidden objects through unknown diffusers in complex scattering states with limited speckle grains. Even from a speckle pattern of limited size, multiple speckle patterns can be synthesized to improve the PSNR of the autocorrelation and to augment the data, which not only improves the performance of the traditional speckle-correlation method but also endows the learning method with better reconstruction and generalization capability. The in-depth scattering characteristics and information are explored and verified via both the traditional speckle-correlation method and flexible DL approaches. By using the speckle reassignment priors to improve the autocorrelation PSNR and perform data augmentation, the bootstrapped imaging method significantly enhances the imaging capability, and it is universally applicable to both traditional physical methods and DL methods.

2. Methods

2.1 Physical basis

In 1988, Feng et al. theoretically proposed and experimentally demonstrated the angular memory effect [38,39]. Within the ME range, a small angular rotation of the input beam leads to a rotation of the output speckle pattern by the same angle, and correspondingly to a shift at a distance from the media [40,41]. The patterns of scattered light collected through opaque diffusive layers are invariant to small tilts or shifts of the incident wave-front, and the outgoing light field still retains the object information within the ME range [41]. For a spatially incoherent imaging system, invoking translational invariance and the convolution theorem, the collected scattering pattern can be calculated by

$$I=O*S,$$
where $*$ is a convolution operator, $I$ denotes the speckle pattern, $O$ denotes the hidden object, and $S$ stands for the PSF. By utilizing the convolution theorem [17], the autocorrelation of objects can be extracted by the following equation:
$$I\star I=(O*S)\star(O*S)=(O\star O)*(S\star S),$$
where $\star$ denotes the autocorrelation operation. From Eq. (2), the autocorrelation of the scattering pattern can be described as the convolution between the autocorrelation of the object behind the medium and the autocorrelation of the PSF of the optical scattering system, and since the PSF autocorrelation can be approximated as a $\delta$ function, Eq. (2) can be further reduced to
$$I\star I=(O\star O)+C,$$
where $C$ is the background term, which arises from the autocorrelation process [17]. Within the ME range, the autocorrelation of the speckle pattern can be approximated as the autocorrelation of the object. To assure a successful reconstruction, the PSNR of the measured autocorrelation is an important factor; ideally the autocorrelation is averaged over an infinite number of speckle grains. Next, the object can be retrieved from its autocorrelation, which is calculated by an inverse 2D Fourier transform of its energy spectrum [17]:
$$R(x,y)=I(x,y)\star I(x,y) =FFT^{{-}1} \{|FFT\{I(x,y)\}|^{2}\}.$$
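As a minimal sketch (the function and variable names below are ours, not from the paper), Eq. (4) can be implemented with NumPy FFTs; subtracting the mean beforehand suppresses the dominant zero-frequency pedestal of the background term:

```python
import numpy as np

def speckle_autocorrelation(I):
    """Eq. (4): R = FFT^-1{ |FFT{I}|^2 }, computed via the 2D FFT.
    The mean is removed first to suppress the DC pedestal, and
    fftshift moves the zero-lag peak to the array centre."""
    I = I - I.mean()
    spectrum = np.abs(np.fft.fft2(I)) ** 2      # energy spectrum |FFT{I}|^2
    R = np.fft.ifft2(spectrum).real             # imaginary part is round-off
    return np.fft.fftshift(R)

# for any speckle-like pattern the autocorrelation peaks at zero shift
rng = np.random.default_rng(0)
speckle = rng.random((255, 255))
R = speckle_autocorrelation(speckle)
```

For a 255$\times$255 input, the zero-lag peak lands at index (127, 127) after the shift.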

It is valid to piece together numerous frames of patterns with limited speckle grains, but this requires a time-varying dynamic diffuser and multiple camera acquisitions [20]. By exploiting the random modulation process of light scattering, multiple speckle patterns can instead be synthesized via the bootstrapped imaging method.

With an ideal pinhole on the optical axis of the object plane serving as the corresponding point of the object plane, and utilizing the Fresnel diffraction theory, the field on the front surface of the scattering medium $U_{TM}$ can be expressed as [1,42]:

$$U_{TM}(a_{\xi},b_{\eta})=\frac{e^{j2\pi u/\lambda}}{j\lambda^2 u}e^{\frac{j\pi}{\lambda u}(a_{\xi}^2+b_{\eta}^2)},$$
where $(a_{\xi },b_{\eta })$ are the 2D coordinates on the diffuser plane, and $u$ is the distance from the object to the diffuser. For the scattering modulation process, the scattering diffuser can be regarded as a 2D random phase disturbance; the transmission matrix $TM \begin {bmatrix} (a_{1},b_{1}) & \cdots & (a_{1},b_{n}) \\ \vdots & \ddots & \vdots \\ (a_{m},b_{1}) & \cdots & (a_{m},b_{n}) \end {bmatrix}$ is introduced into the modulation model, and $(a_{\xi },b_{\eta })$ can be indexed as a component of the TM. Using the scalar diffraction theory to characterize the scattering process, the light field on the sensor plane can be expressed as:
$$\begin{aligned} h_v (a_{\xi},b_{\eta}) &= \frac{e^{j2\pi u/\lambda v}}{-\lambda^2 u}\sum_{\xi}^{m}\sum_{\eta}^{n}TM(a_{\xi},b_{\eta})\\ &\quad \cdot \frac{e^{\frac{j\pi}{\lambda u}(a_{\xi}^2+b_{\eta}^2)} \cdot e^{\frac{j2\pi}{\lambda}\sqrt{v^2 + (a_{\xi}-a)^2 + (b_{\eta}-b)^2}}}{v^2 + (a_{\xi}-a)^2 + (b_{\eta}-b)^2}, \end{aligned}$$
where $v$ is the propagation distance between the diffuser and the sensor. Therefore, the PSF at the image plane is the squared modulus of the received light field amplitude, and the measured intensity can be expressed as:
$$\begin{aligned} I_v &= O* |h_v (a_{\xi},b_{\eta})|^2 \\ &= O* \bigg| \frac{e^{j2\pi u/\lambda v}}{-\lambda^2 u}\sum_{\xi}^{m}\sum_{\eta}^{n}TM(a_{\xi},b_{\eta}) \\ & \quad \cdot \frac{e^{\frac{j\pi}{\lambda u}(a_{\xi}^2+b_{\eta}^2)} \cdot e^{\frac{j2\pi}{\lambda}\sqrt{v^2 + (a_{\xi}-a)^2 + (b_{\eta}-b)^2}}}{v^2 + (a_{\xi}-a)^2 + (b_{\eta}-b)^2}\bigg|^2\\ & =O*S_v (a_{\xi},b_{\eta}) . \end{aligned}$$

The speckle reassignment can be regarded as a modulation with a reassigned $TM(a_{\xi },b_{\eta })$ via Eq. (7). Different arrangements of $TM(a_{\xi },b_{\eta })$ characterize different modulation processes, which can model the degradation process well with single-shot speckles of limited grains. Therefore, Eq. (3) can be modified as follows:

$$I\star I=(O\star O)+C (a_{\xi},b_{\eta}),$$
where $C(a_{\xi },b_{\eta })$ denotes the different background terms associated with the reassigned $TM(a_{\xi },b_{\eta })$. The different background terms can be used to describe different scattering processes with the reassigned speckle patterns. As shown in Fig. 1(a), the raw speckles are divided into nine sub-regions and recombined after reassignment. Then, the autocorrelation is correspondingly calculated. From Fig. 1(b), the autocorrelations of the reassigned speckles have similar main structures (green dashed box) while the background terms differ obviously (red dashed box), which indicates the different modulation processes. The background terms of the different autocorrelations have similar noise levels, so a single speckle reassignment cannot be used for imaging enhancement. The different autocorrelation background terms of the reassigned speckle patterns are consistent with the term $C (a_{\xi },b_{\eta })$ of Eq. (8).


Fig. 1. (a) Speckle recombination after reassignment and corresponding autocorrelation. (b) The normalized intensity values of the white dash lines of (a). (c) The autocorrelations of GT and superposition with different times of reassignment, the images below are the corresponding reconstruction. N, the reassignment number with different bootstrapped results. (d) The evolution of the CC and PSNR of different N. (e) The normalized intensity values of the white dash lines of (c). The evolution of the CC and PSNR with GT in (a) and (c) are also presented.


For this case, the number of recombined forms with nine sub-regions is 362880 (9! = 362880). As shown in Fig. 1(c), by exploiting the random background terms of the reassigned speckle autocorrelations, the quality of the autocorrelation can be improved by superimposing different reassigned autocorrelations. Compared with the ideal condition of an infinite number of speckles, the improvement of the autocorrelation quality is finite with limited speckle grains. The correlation coefficient (CC) and PSNR between different bootstrapped results and the ground truth (GT) are presented in Fig. 1(d), where the autocorrelation enhancement can be intuitively observed from the CC and PSNR curves. From Fig. 1(e), the autocorrelations of superpositions with different numbers of reassignments have similar main structures (blue dashed box) and reduced background interference terms (pink dashed box).

The difference between autocorrelation background terms increases as the difference between speckle reassignments increases. Therefore, for a fixed value of $N$, it is preferable to select the reassigned speckle patterns using an equal-interval sampling strategy, i.e., the sampling step is the total recombined number divided by $N$. With this bootstrapped enhancement step, the useful information of the hidden object is retained and the useless artifact noise is diminished. Compared with computing the autocorrelation of the limited speckle grains only once, the bootstrapped enhancement method can suppress the background noise and improve the PSNR of the autocorrelation, which is crucial in the traditional ME-based correlation method. More comparison results with and without the bootstrapped enhancement are presented in Fig. 2. Even with limited speckle grains (255$\times$255 pixels), better imaging performance can be obtained, with more accurate reconstruction results and less background artifact interference.
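The reassignment-and-superposition procedure can be sketched as follows (a hypothetical implementation with our own naming; the paper does not publish code). The speckle is cut into a 3$\times$3 grid of sub-regions, $N$ tile permutations are picked with the equal-interval sampling step $9!/N$, and the autocorrelations of the recombined patterns are averaged:

```python
import itertools
import math
import numpy as np

def autocorr(I):
    """Centred autocorrelation via the Wiener-Khinchin relation (Eq. (4))."""
    I = I - I.mean()
    return np.fft.fftshift(np.fft.ifft2(np.abs(np.fft.fft2(I)) ** 2).real)

def bootstrapped_autocorrelation(speckle, N=8, grid=3):
    """Average the autocorrelations of N tile-reassigned speckle copies,
    selecting permutations with the equal-interval sampling strategy."""
    h, w = speckle.shape
    th, tw = h // grid, w // grid
    tiles = [speckle[r * th:(r + 1) * th, c * tw:(c + 1) * tw]
             for r in range(grid) for c in range(grid)]
    step = math.factorial(grid * grid) // N     # sampling step: 9!/N here
    wanted = {k * step for k in range(N)}
    acc = np.zeros((grid * th, grid * tw))
    for idx, order in enumerate(itertools.permutations(range(grid * grid))):
        if idx > max(wanted):
            break
        if idx not in wanted:
            continue
        rows = [np.hstack([tiles[i] for i in order[r * grid:(r + 1) * grid]])
                for r in range(grid)]
        acc += autocorr(np.vstack(rows))        # different background term each time
    return acc / N

rng = np.random.default_rng(1)
speckle = rng.random((255, 255))                # stand-in for limited speckle grains
enhanced = bootstrapped_autocorrelation(speckle, N=8)
```

Averaging keeps the object term $O\star O$, which is common to every permutation, while the permutation-dependent background terms partially cancel.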


Fig. 2. The superposition with different times of reassignment and the corresponding reconstruction. N, the reassignment number with different bootstrapped results. The evolution of the CC and PSNR with GT are also presented.


Since the existing speckles are exploited in depth, no additional systematic noise is introduced by collecting multiple frames of patterns over a long period of time. By directly using the traditional phase retrieval algorithm [17,43], the corresponding reconstructions are presented. Compared with the original results without bootstrapped enhancement, the results with $N$=256 have better imaging quality and fewer noise artifacts [17]. Therefore, the advantages of speckle reassignment are proved by the theoretical analysis and experimental demonstration. Furthermore, more scattering states can be synthesized from the limited speckle grains, and the in-depth physics priors can be introduced into the learning framework to improve the reconstruction capability and expand the application scenes, e.g., more complex objects, high-dynamic-range imaging, and multispectral imaging.
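The retrieval step can be illustrated with a simple Fienup-type hybrid input-output loop (a sketch only; the exact algorithm and parameters of Refs. [17,43] may differ, and the names below are ours). By the Wiener-Khinchin relation, the Fourier modulus of the object is the square root of the autocorrelation's spectrum:

```python
import numpy as np

def hio_from_autocorrelation(R, iters=100, beta=0.9, seed=0):
    """Hybrid input-output phase retrieval from a centred autocorrelation R.
    |FFT{O}|^2 equals the FFT of O's autocorrelation, so the measured
    Fourier modulus is the square root of R's spectrum."""
    spectrum = np.fft.fft2(np.fft.ifftshift(R)).real
    M = np.sqrt(np.clip(spectrum, 0, None))       # object Fourier modulus
    rng = np.random.default_rng(seed)
    g = rng.random(R.shape)                       # random initial guess
    for _ in range(iters):
        G = np.fft.fft2(g)
        G = M * np.exp(1j * np.angle(G))          # impose the measured modulus
        g_new = np.fft.ifft2(G).real
        bad = g_new < 0                           # non-negativity constraint
        g = np.where(bad, g - beta * g_new, g_new)
    return np.clip(g, 0, None)

# toy demonstration: retrieve a simple bar from its own autocorrelation
obj = np.zeros((64, 64))
obj[28:36, 24:40] = 1.0
R = np.fft.fftshift(np.fft.ifft2(np.abs(np.fft.fft2(obj)) ** 2).real)
rec = hio_from_autocorrelation(R)
```

As in any autocorrelation-based retrieval, the reconstruction is only defined up to translation and point reflection.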

2.2 Framework of physics-aware learning

The difference between background terms can indicate the different degradation processes in light scattering, which is beneficial for improving the generalization capability through unknown scattering media via speckle redundancy [32]. However, more speckle sub-regions are needed to utilize the speckle redundancy prior, which is not applicable to information-limited scattering scenes. By utilizing speckle reassignment with limited speckle grains, speckles can be synthesized to produce more autocorrelations with different background terms. For a proven DL method, the performance is mainly determined by the data gathered, and it is often much better to gather more data than to improve the learning algorithm [44]. As shown in Fig. 3, the speckle bootstrapped step is employed for data augmentation, which is a feasible approach to accurately reconstruct hidden objects with a limited speckle region. The reassigned speckles are processed by the speckle-correlation module to generate autocorrelations with different background terms.


Fig. 3. Schematic of the physics-aware learning framework via deep speckle reassignment. (a) In-depth physics-aware pre-processing step. (b) Neural network post-processing step.


The in-depth physics priors, comprising the speckle bootstrapped step and the speckle-correlation step, can be introduced into the learning framework. The neural network post-processing step is employed for mining the intrinsic association and reconstructing the objects. To demonstrate the effectiveness of the speckle bootstrapped step and to compare objectively with the previous work, a simple convolutional neural network (CNN) of the U-net type is selected for post-processing reconstruction [45]. The loss function that we consider in the training process combines a negative Pearson correlation coefficient (NPCC) loss and a mean square error (MSE) loss. The loss function can be calculated as follows:

$$Loss=Loss_{NPCC} + Loss_{MSE},$$
$$Loss_{NPCC} =\frac{-1 \times \sum_{x=1}^{w}\sum_{y=1}^{h}[i(x,y)-\hat{i}][I(x,y)-\hat{I}]}{\sqrt{\sum_{x=1}^{w}\sum_{y=1}^{h}[i(x,y)-\hat{i} ]^2}\sqrt{\sum_{x=1}^{w}\sum_{y=1}^{h}[I(x,y)-\hat{I}]^2}},$$
$$Loss_{MSE} =\sum_{x=1}^{w}\sum_{y=1}^{h}|\tilde{i}(x,y)-I(x,y)|^2,$$
where $\hat {I}$ and $\hat {i}$ are the mean values of the object ground truth $I$ and the DNN output $i$, respectively, and $\tilde {i}$ is the normalized $i$. The combined loss function has a good capability for reconstructing objects of different complexity and sparsity through different scattering media. To train the bootstrapped enhanced learning models, an Adam optimizer is selected to optimize the CNN weights in the training process. The learning rate starts from $1\times 10^{-4}$ for the first 200 epochs and decays to $1\times 10^{-5}$ for the final 200 epochs. The CNN is implemented in PyTorch 1.4.0 with a Titan RTX graphics processing unit and an i9-9940X CPU under Ubuntu 16.04.
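A NumPy sketch of Eqs. (9)-(11) is shown below; note that the paper trains in PyTorch, and the min-max normalization used here for $\tilde{i}$ is our assumption, since the normalization is not specified:

```python
import numpy as np

def npcc_loss(i_pred, I_true):
    """Eq. (10): negative Pearson correlation coefficient."""
    a = i_pred - i_pred.mean()
    b = I_true - I_true.mean()
    return -np.sum(a * b) / (np.sqrt(np.sum(a ** 2)) * np.sqrt(np.sum(b ** 2)))

def mse_loss(i_pred, I_true):
    """Eq. (11): squared error between the normalized output and the GT
    (min-max normalization assumed for the tilde-i term)."""
    i_norm = (i_pred - i_pred.min()) / (i_pred.max() - i_pred.min())
    return np.sum((i_norm - I_true) ** 2)

def combined_loss(i_pred, I_true):
    """Eq. (9): Loss = Loss_NPCC + Loss_MSE."""
    return npcc_loss(i_pred, I_true) + mse_loss(i_pred, I_true)

# a perfect prediction scores NPCC = -1 and MSE = 0, i.e. total loss -1
rng = np.random.default_rng(0)
gt = rng.random((128, 128))
gt = (gt - gt.min()) / (gt.max() - gt.min())   # GT already in [0, 1]
loss = combined_loss(gt, gt)
```

The NPCC term drives structural agreement regardless of scale, while the MSE term anchors the absolute intensity values.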

2.3 Experimental configuration and data groupings

The optical configuration designed for the bootstrapped imaging method is depicted in Fig. 4. A pseudo-thermal, spatially incoherent source, comprising a 640 nm laser and a rotating diffuser (RD), is employed as the illumination. The grayscale objects are displayed on a digital micro-mirror device (DMD) (pixel count: 1024$\times$768, pixel size: 13.68 $\mu$m). A total internal reflection (TIR) prism is employed to fold the optical path for conveniently collecting the speckle patterns. An industrial complementary metal-oxide-semiconductor (CMOS) camera (acA1920-155um, Basler AG, Ahrensburg, Germany) is selected to collect the speckle patterns. Five neutral density filters (NDFs) with different transmissions (1%, 3%, 25%, 40%, and 65%) are assembled in an adjustable setup. With a fixed exposure time, the different NDFs can be utilized to generate speckles with different dynamic ranges. Three different ground glasses are employed as the scattering media: a 220 grit diffuser (D1) produced by Thorlabs, a 120 grit diffuser (D2) produced by Thorlabs, and a 220 grit diffuser (D3) produced by Edmund. The distance between the object and the diffuser is 400 mm, and the distance between the diffuser and the CMOS is 80 mm. The diameter of iris1 is 8 mm and the diameter of iris2, combined with the collimating lens (CL), is 10 mm.


Fig. 4. Experimental configuration for the bootstrapped imaging through unknown diffusers with limited speckle grains. (a)-(c) Collected speckle patterns with different responses via the adjustable NDFs; the speckles within the blue dashed line are the useful area which contains the effective information.


As for the processing of the speckle patterns, we take a 255$\times$255-pixel region from the speckle pattern to calculate the autocorrelation and crop the central 128$\times$128 pixels of the autocorrelation pattern as the network input. The objects with different complexities and data characteristics are respectively selected from several different datasets, including the MNIST [46], FEI face dataset [47], Coco dataset [48], and ALOI object dataset [49]. Some of the speckles are selected from previous work; to demonstrate the progress of the bootstrapped imaging method over our previous work, the same dataset is employed for recovery through unknown scattering media. The data can be roughly divided into five groups:

Group 1: The complex objects are selected from the FEI faces within the ME range [28], with different physics-aware pre-processing steps. The training dataset is collected from D1, and D2 is employed as the unknown scattering medium. The objects are the first 360 faces, used as the seen objects to first verify the effectiveness and necessity of the bootstrapped step.

Group 2: The objects are selected from the MNIST dataset and include 900 single characters. The training dataset is collected from D1, and D2 is employed as the unknown scattering medium. The exposure times for collecting the speckle patterns are the same. The speckles with different exposure states are collected via the five different NDFs, which can be employed to verify the generalization capability over an unknown dynamic range.

Group 3: The objects are randomly selected from several categories of the Coco dataset, which have more complex structures and are related to practical scenes. The training dataset is collected from D1 with the seen objects (the first 650 objects); D2 and D3 are employed as the unknown scattering media with all objects (740 in total).

Group 4: The objects are selected from the ALOI dataset and collected from the color projector system [32]. The training dataset is collected from D1, and D2 is employed as the unknown scattering media. 450 objects are selected as the seen objects to verify the effectiveness of the bootstrapped step in the color imaging system.

Group 5: The objects are single characters, with the FOV extended to 1.2 times the ME range. The dataset and grouping information are similar to Group 1.
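The speckle pre-processing described above (a 255$\times$255 region, its autocorrelation, then a central 128$\times$128 crop) can be sketched as follows; a centred crop is assumed, since the paper does not state which region of the frame is taken:

```python
import numpy as np

def network_input(raw_speckle, region=255, out=128):
    """Crop a region x region patch from the raw camera frame, compute its
    autocorrelation (Eq. (4)) and centre-crop to out x out for the CNN."""
    H, W = raw_speckle.shape
    r0, c0 = (H - region) // 2, (W - region) // 2   # centred crop (assumed)
    patch = raw_speckle[r0:r0 + region, c0:c0 + region]
    patch = patch - patch.mean()
    R = np.fft.fftshift(np.fft.ifft2(np.abs(np.fft.fft2(patch)) ** 2).real)
    cy, cx = region // 2, region // 2
    half = out // 2
    return R[cy - half:cy + half, cx - half:cx + half]

rng = np.random.default_rng(3)
frame = rng.random((600, 800))                      # stand-in camera frame
x = network_input(frame)
```

The 128$\times$128 output keeps the central lobe of the autocorrelation, where the object term of Eq. (8) is concentrated.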

3. Experimental results and analysis

To demonstrate the necessity of the bootstrapped enhancement learning strategy via deep speckle reassignment, comprehensive experiments are provided and analyzed. In addition to presenting a large number of reconstruction results, the mean absolute error (MAE), PSNR, and structural similarity index (SSIM) are employed to evaluate the generalization results [50], which is helpful for accurate analysis.

Firstly, the complex FEI human face objects [47] of Group 1 are employed to verify the effectiveness of the bootstrapped DL imaging method. Utilizing more training diffusers [28] and multiple speckle sub-regions is essential for reliably reconstructing complex objects [32], which is not applicable to scenes with limited speckle grains. The comparison results with and without the bootstrapped step are presented in Fig. 5. With limited speckle regions, the testing results without the bootstrapped step are unreliable and indistinguishable. By utilizing the speckle reassignment step for data augmentation, high-fidelity reconstructions can be obtained and even the micro-expressions can be identified. Furthermore, better imaging performance can be obtained by increasing the number of bootstrapped steps ($N$). By using more speckles for the speckle-correlation step, the imaging performance is correspondingly improved. However, without multiple autocorrelations with different background terms, it is hard to characterize the random modulation processes and obtain reliable generalization capability for imaging through unknown scattering media [32]. Therefore, combined with the bootstrapped priors of deep speckle reassignment, more information about the generalization scenes can be synthesized and robust imaging performance can be achieved with limited speckles, which demonstrates the feasibility of recovering complex objects with limited speckle grains.


Fig. 5. Comparison of results with and without the bootstrapped step.


The perceptual units of the camera are influenced by the practical scene, e.g., the state of the scattering media and the exposure of the CMOS. As shown in Fig. 4, the speckle patterns have different statistical properties with different transmission NDFs. For normal exposure, the speckles are relatively uniformly distributed. However, the speckles in the under-exposure and over-exposure states have different distributions. In the under-exposure states, the response of the CMOS is relatively weak and the information about the hidden object is mainly concentrated in the central part. The autocorrelation of under-exposed speckles has poor contrast, and the grayscale transformation pre-processing is beneficial for the bootstrapped DL method. Metzler et al. have demonstrated that reconstruction in the under-exposure state can be greatly improved by using a learning method [51]. In the over-exposure states, the useful speckles are mainly concentrated in the edge region of the collected patterns. By utilizing the speckle redundancy, the object information can be retrieved with the sub-region speckles, and the speckles can then be stitched and reassigned for imaging through unknown scattering media. Therefore, a flexible physics-aware DL framework is essential for complex scattering scenes. The testing results for generalization reconstruction through unknown scattering media and unknown NDFs are correspondingly presented in Fig. 6, and the objective indicators are provided in Table 1. By utilizing the speckle reassignment priors and the speckle-correlation pre-processing step on limited speckle regions of a fixed exposure state, reliable reconstruction results can be obtained with unknown scattering diffusers and unknown exposure states. Furthermore, the closer the dynamic range of the testing state is to that of the training state, the better the imaging performance. Selecting the normal exposure state with the 25% transmission NDF as the training state yields better imaging performance. Much more noise and disturbance are introduced into the imaging system in the over-exposure state, which brings more noise and artifacts to the reconstruction results. The imaging performance can be analyzed with the objective evaluation in Table 1, which quantifies the imaging performance, and the same conclusion can be drawn from the visual imaging results. Based on the bootstrapped priors of deep speckle reassignment, the dynamic response range of traditional DL approaches can be greatly improved, and the capability to reconstruct through complex scattering states can be further expanded.


Fig. 6. Generalization reconstruction results with unknown diffuser (D2) and unknown exposure states via the proposed method and directly imaging with raw speckles with known diffuser (D1) and unknown exposure states (N=4).



Table 1. Quantitative evaluation results of generalization results

The purely data-driven DL method depends heavily on the variability of the dataset, which results in limited imaging performance in complex scattering states and for objects with diverse structural characteristics. Based on the exploration of combining physics priors and learning approaches, it is possible to reconstruct more complex objects in unknown scattering conditions [28]. As shown in Fig. 7, practical scenes are selected from the Coco dataset to verify the generalization capability in unknown scattering states. Even with complex practical scene data and limited speckles, the seen object scenes can be accurately reconstructed via the bootstrapped imaging method, and the unseen scenes can be reconstructed and classified roughly yet reliably, such as the approximate outlines of airplanes and directional road signs.


Fig. 7. Testing results for generalization reconstruction of Group 3 (N=6).


The bootstrapped imaging method can also be used in the generalization task of wide-spectrum imaging. As depicted in Fig. 8(a), color speckle patterns can be separated into three independent channels for the bootstrapped step, followed by the speckle-correlation pre-processing step for color scalable imaging through unknown scattering media. For color speckles, each independent channel can be separated into four sub-regions for bootstrap-informed data augmentation, and the number of recombined forms is 13824 ($4!\times 4!\times 4!$=13824). Therefore, even using 256$\times$256 sub-regions, stable and reliable reconstruction results can be obtained for color objects and the complex ALOI dataset. Compared with using more scattering diffusers [28] or more sub-regions with the speckle redundancy [32], the bootstrapped imaging method is more efficient and robust for dynamic scattering scenes. Furthermore, the proposed method is also applicable to object recovery exceeding the ME range. As shown in Fig. 9, using the proposed method with a limited pixel area of 256$\times$256, the objects can still be reconstructed relatively reliably via the bootstrapped imaging method. The reconstructed results are clearer and more accurate compared with the original method, and the SSIM is improved from 0.3247 to 0.7982. Therefore, the bootstrapped imaging method has better generalization capability and robust adaptability for many complex scattering scenes, which significantly benefits its practical applications.
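Per-channel reassignment with 2$\times$2 sub-regions can be sketched as follows (the names are ours, not the paper's); since the three channels are permuted independently, the recombined-form count is indeed $(4!)^3 = 13824$:

```python
import itertools
import math
import numpy as np

def reassign_2x2(channel, order):
    """Recombine a single colour channel's four quadrants in `order`."""
    h, w = channel.shape
    th, tw = h // 2, w // 2
    tiles = [channel[r * th:(r + 1) * th, c * tw:(c + 1) * tw]
             for r in range(2) for c in range(2)]
    return np.vstack([np.hstack([tiles[order[0]], tiles[order[1]]]),
                      np.hstack([tiles[order[2]], tiles[order[3]]])])

assert math.factorial(4) ** 3 == 13824       # independent channel permutations

rng = np.random.default_rng(2)
rgb = rng.random((256, 256, 3))              # stand-in colour speckle pattern
orders = list(itertools.permutations(range(4)))
reassigned = np.stack([reassign_2x2(rgb[..., ch], orders[ch + 1])
                       for ch in range(3)], axis=-1)
```

Each channel's pixel statistics are preserved; only the tile arrangement, and hence the background term of the autocorrelation, changes.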


Fig. 8. (a) Wide-spectrum bootstrapped enhancement with speckle reassignment and corresponding autocorrelation. (b) Testing results for generalization reconstruction of Group 4 (N=12).



Fig. 9. Comparison of results with and without the bootstrapped enhancement step for imaging exceeding the ME range (N=6).


4. Discussion

According to the theoretical analysis and experimental demonstration, several discussions are clarified as follows:

  • (i) For complex scattering scenes, a bootstrapped imaging method is proposed for scalable imaging through unknown diffusers. Even with limited speckle sub-regions, autocorrelations with higher PSNR can be obtained by superposition, which has been demonstrated to improve the imaging quality of the traditional speckle-correlation method. Furthermore, the bootstrapped step is an efficient way to synthesize many more autocorrelations with different background terms. The physics-based data augmentation is crucial for the learning approach to improve the imaging performance and computational efficiency, which paves the way to a flexible learning framework for imaging through scattering media.
  • (ii) The theory of speckle reassignment can be introduced in different scattering cases and serve as a plug-and-play prior for efficient imaging problems. Particularly for the DL methods, a physics-based data augmentation strategy is proposed by utilizing the bootstrap prior with deep speckle reassignment. The generalization and reconstruction capability can be significantly improved via the deep speckle reassignment pre-processing step. The robust and extraordinary reconstruction capability of the bootstrapped imaging method has been verified by several system experiments, e.g., complex object recovery, dynamic range object recovery, wide-spectrum object recovery, and extensive-FOV object recovery.
  • (iii) It should be noted that the bootstrapped imaging method is based on the speckle-correlation physical model, and its reconstruction capability for more complex objects should be further improved. The proposed method can also be applied to multiple-scattering environments in which the imaging process can be modeled by speckle correlation, e.g., imaging through dynamic fibers [52] and around corners [53]. Meanwhile, the improvement of the autocorrelation resolution via a stochastic scattering localization imaging technique [54], the improvement limit with different numbers of speckle sub-regions, and the relationship between object size and reassigned sub-region size need to be further explored in future work. Furthermore, an accurate synthetic model could be formulated for training-data generation [51], and more physical constraints [55] could also be introduced into the bootstrapped imaging framework.
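The superposition of autocorrelations described in point (i) can be sketched as follows. This is a minimal illustration using the Wiener-Khinchin relation $R = \mathrm{FFT}^{-1}\{|\mathrm{FFT}\{I\}|^2\}$ and a 2$\times$2 quadrant reassignment; the function names and the averaging details are illustrative assumptions, not the authors' exact implementation:

```python
import numpy as np

def autocorrelation(img):
    """Autocorrelation via the Wiener-Khinchin theorem,
    R = IFFT{|FFT{I}|^2}, centered and peak-normalized."""
    img = img - img.mean()
    spectrum = np.abs(np.fft.fft2(img)) ** 2
    ac = np.fft.fftshift(np.fft.ifft2(spectrum).real)
    return ac / ac.max()

def bootstrapped_autocorrelation(speckle, n=6, rng=None):
    """Average the autocorrelations of n random quadrant recombinations.

    The object autocorrelation term is invariant under the reassignment,
    while the statistical background term varies between recombinations,
    so averaging raises the autocorrelation PSNR."""
    rng = np.random.default_rng(rng)
    h, w = speckle.shape
    quads = [speckle[:h//2, :w//2], speckle[:h//2, w//2:],
             speckle[h//2:, :w//2], speckle[h//2:, w//2:]]
    acc = np.zeros((h, w), dtype=float)
    for _ in range(n):
        order = rng.permutation(4)
        top = np.hstack([quads[order[0]], quads[order[1]]])
        bottom = np.hstack([quads[order[2]], quads[order[3]]])
        acc += autocorrelation(np.vstack([top, bottom]))
    return acc / n
```

The averaged autocorrelation can then be handed to a phase-retrieval step or to the network pre-processing stage in place of the single-frame autocorrelation.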

5. Conclusion

In this paper, a deeper physics prior has been explored via speckle reassignment, and a bootstrapped imaging method is proposed for imaging in complex scattering states with limited speckle grains. Even with a limited training dataset, robust imaging performance and strong generalization capability can be obtained via the speckle-reassigned enhancement, which is applicable to complex scattering conditions and closer to practical application scenes. This bootstrapped imaging method gives an enlightening reference for flexible learning approaches with physics-based data augmentation and paves the way to solving ill-posed inverse problems under non-cooperative conditions.

Funding

National Natural Science Foundation of China (61971227, 62031018, 62101255); Nanjing Medical Science and Technique Development Foundation (QRX17057, ZKX19018); China Postdoctoral Science Foundation (2019M661804, 2021M701721, 2022T150317); Postdoctoral Science Foundation of Jiangsu Province (2019k060); Postgraduate Research & Practice Innovation Program of Jiangsu Province (KYCX22_0415).

Acknowledgments

We thank Chenyin Zhou, Kaixuan Bai, Haocun Qi and Chutian Wang for technical supports and experimental discussion.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. J. W. Goodman, Speckle phenomena in optics: theory and applications (Roberts and Company, 2007).

2. S. Yoon, M. Kim, M. Jang, Y. Choi, W. Choi, S. Kang, and W. Choi, “Deep optical imaging within complex scattering media,” Nat. Rev. Phys. 2(3), 141–158 (2020). [CrossRef]  

3. V. Ntziachristos, “Going deeper than microscopy: the optical imaging frontier in biology,” Nat. Methods 7(8), 603–614 (2010). [CrossRef]  

4. S. Gigan, O. Katz, H. B. De Aguiar, et al., “Roadmap on wavefront shaping and deep imaging in complex media,” JPhys Photonics 4(4), 042501 (2022). [CrossRef]  

5. J. Bertolotti and O. Katz, “Imaging in complex media,” Nat. Phys. 18(9), 1008–1017 (2022). [CrossRef]  

6. S. Jeong, Y.-R. Lee, W. Choi, S. Kang, J. H. Hong, J.-S. Park, Y.-S. Lim, H.-G. Park, and W. Choi, “Focusing of light energy inside a scattering medium by controlling the time-gated multiple light scattering,” Nat. Photonics 12(5), 277–283 (2018). [CrossRef]  

7. L. Devaud, B. Rauer, M. Kühmayer, J. Melchard, M. Mounaix, S. Rotter, and S. Gigan, “Temporal light control in complex media through the singular-value decomposition of the time-gated transmission matrix,” Phys. Rev. A 105(5), L051501 (2022). [CrossRef]  

8. I. M. Vellekoop and A. Mosk, “Focusing coherent light through opaque strongly scattering media,” Opt. Lett. 32(16), 2309–2311 (2007). [CrossRef]  

9. R. Cao, F. de Goumoens, B. Blochet, J. Xu, and C. Yang, “High-resolution non-line-of-sight imaging employing active focusing,” Nat. Photonics 16(6), 462–468 (2022). [CrossRef]  

10. S. Popoff, G. Lerosey, R. Carminati, M. Fink, A. Boccara, and S. Gigan, “Measuring the transmission matrix in optics: an approach to the study and control of light propagation in disordered media,” Phys. Rev. Lett. 104(10), 100601 (2010). [CrossRef]  

11. H. Lee, S. Yoon, P. Loohuis, J. H. Hong, S. Kang, and W. Choi, “High-throughput volumetric adaptive optical imaging using compressed time-reversal matrix,” Light: Sci. Appl. 11(1), 16 (2022). [CrossRef]  

12. E. Tajahuerce, V. Durán, P. Clemente, E. Irles, F. Soldevila, P. Andrés, and J. Lancis, “Image transmission through dynamic scattering media by single-pixel photodetection,” Opt. Express 22(14), 16945–16955 (2014). [CrossRef]  

13. L. Pan, Y. Shen, J. Qi, J. Shi, and X. Feng, “Single photon single pixel imaging into thick scattering medium,” Opt. Express 31(9), 13943–13958 (2023). [CrossRef]  

14. X. Xu, X. Xie, H. He, H. Zhuang, J. Zhou, A. Thendiyammal, and A. P. Mosk, “Imaging objects through scattering layers and around corners by retrieval of the scattered point spread function,” Opt. Express 25(26), 32829–32840 (2017). [CrossRef]  

15. T. Wu, J. Dong, and S. Gigan, “Non-invasive single-shot recovery of a point-spread function of a memory effect based scattering imaging system,” Opt. Lett. 45(19), 5397–5400 (2020). [CrossRef]  

16. J. Bertolotti, E. G. Van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Non-invasive imaging through opaque scattering layers,” Nature 491(7423), 232–234 (2012). [CrossRef]  

17. O. Katz, P. Heidmann, M. Fink, and S. Gigan, “Non-invasive single-shot imaging through scattering layers and around corners via speckle correlations,” Nat. Photonics 8(10), 784–790 (2014). [CrossRef]  

18. A. Boniface, J. Dong, and S. Gigan, “Non-invasive focusing and imaging in scattering media with a fluorescence-based transmission matrix,” Nat. Commun. 11(1), 6154 (2020). [CrossRef]  

19. L. Zhu, F. Soldevila, C. Moretti, A. d’Arco, A. Boniface, X. Shao, H. B. de Aguiar, and S. Gigan, “Large field-of-view non-invasive imaging through scattering layers using fluctuating random illumination,” Nat. Commun. 13(1), 1447 (2022). [CrossRef]  

20. X. Wang, H. Liu, M. Chen, Z. Liu, and S. Han, “Imaging through dynamic scattering media with stitched speckle patterns,” Chin. Opt. Lett. 18(4), 042604 (2020). [CrossRef]  

21. G. Barbastathis, A. Ozcan, and G. Situ, “On the use of deep learning for computational imaging,” Optica 6(8), 921–943 (2019). [CrossRef]  

22. V. Boominathan, J. T. Robinson, L. Waller, and A. Veeraraghavan, “Recent advances in lensless imaging,” Optica 9(1), 1–16 (2022). [CrossRef]  

23. S. Li, M. Deng, J. Lee, A. Sinha, and G. Barbastathis, “Imaging through glass diffusers using densely connected convolutional networks,” Optica 5(7), 803–813 (2018). [CrossRef]  

24. M. Lyu, H. Wang, G. Li, S. Zheng, and G. Situ, “Learning-based lensless imaging through optically thick scattering media,” Adv. Photonics 1(03), 1 (2019). [CrossRef]  

25. X. Hu, J. Zhao, J. E. Antonio-Lopez, S. Gausmann, R. A. Correa, and A. Schülzgen, “Adaptive inverse mapping: a model-free semi-supervised learning approach towards robust imaging through dynamic scattering media,” Opt. Express 31(9), 14343–14357 (2023). [CrossRef]  

26. B. Lin, X. Fan, and Z. Guo, “Self-attention module in a multi-scale improved u-net (sam-miu-net) motivating high-performance polarization scattering imaging,” Opt. Express 31(2), 3046–3058 (2023). [CrossRef]  

27. M. Yang, Z.-H. Liu, Z.-D. Cheng, J.-S. Xu, C.-F. Li, and G.-C. Guo, “Deep hybrid scattering image learning,” J. Phys. D: Appl. Phys. 52(11), 115105 (2019). [CrossRef]  

28. S. Zhu, E. Guo, J. Gu, L. Bai, and J. Han, “Imaging through unknown scattering media based on physics-informed learning,” Photonics Res. 9(5), B210–B219 (2021). [CrossRef]  

29. S. Zhu, E. Guo, Q. Cui, L. Bai, J. Han, and D. Zheng, “Locating and imaging through scattering medium in a large depth,” Sensors 21(1), 90 (2020). [CrossRef]  

30. S. Zhu, E. Guo, K. Bai, W. Zhang, L. Bai, and J. Han, “Displacement-sensible imaging through unknown scattering media via physics-aware learning,” Opt. Lasers Eng. 160, 107292 (2023). [CrossRef]  

31. E. Guo, Y. Sun, S. Zhu, D. Zheng, C. Zuo, L. Bai, and J. Han, “Single-shot color object reconstruction through scattering medium based on neural network,” Opt. Lasers Eng. 136, 106310 (2021). [CrossRef]  

32. S. Zhu, E. Guo, J. Gu, Q. Cui, C. Zhou, L. Bai, and J. Han, “Efficient color imaging through unknown opaque scattering layers via physics-aware learning,” Opt. Express 29(24), 40024–40037 (2021). [CrossRef]  

33. B. Rahmani, D. Loterie, E. Kakkava, N. Borhani, U. Teğin, D. Psaltis, and C. Moser, “Actor neural networks for the robust control of partially measured nonlinear systems showcased for image propagation through diffuse media,” Nat. Mach. Intell. 2(7), 403–410 (2020). [CrossRef]  

34. Y. Li, Y. Xue, and L. Tian, “Deep speckle correlation: a deep learning approach toward scalable imaging through scattering media,” Optica 5(10), 1181–1190 (2018). [CrossRef]  

35. Y. Li, S. Cheng, Y. Xue, and L. Tian, “Displacement-agnostic coherent imaging through scatter with an interpretable deep neural network,” Opt. Express 29(2), 2244–2257 (2021). [CrossRef]  

36. M. R. Kellman, E. Bostan, N. A. Repina, and L. Waller, “Physics-based learned design: optimized coded-illumination for quantitative phase imaging,” IEEE Trans. Comput. Imaging 5(3), 344–353 (2019). [CrossRef]  

37. F. Wang, Y. Bian, H. Wang, M. Lyu, G. Pedrini, W. Osten, G. Barbastathis, and G. Situ, “Phase imaging with an untrained neural network,” Light: Sci. Appl. 9(1), 77 (2020). [CrossRef]  

38. S. Feng, C. Kane, P. A. Lee, and A. D. Stone, “Correlations and fluctuations of coherent wave transmission through disordered media,” Phys. Rev. Lett. 61(7), 834–837 (1988). [CrossRef]  

39. I. Freund, M. Rosenbluh, and S. Feng, “Memory effects in propagation of optical waves through disordered media,” Phys. Rev. Lett. 61(20), 2328–2331 (1988). [CrossRef]  

40. S. Rotter and S. Gigan, “Light fields in complex media: Mesoscopic scattering meets wave control,” Rev. Mod. Phys. 89(1), 015005 (2017). [CrossRef]  

41. H. Liu, Z. Liu, M. Chen, S. Han, and L. V. Wang, “Physical picture of the optical memory effect,” Photonics Res. 7(11), 1323–1330 (2019). [CrossRef]  

42. X. Wang, X. Jin, and J. Li, “Blind position detection for large field-of-view scattering imaging,” Photonics Res. 8(6), 920–928 (2020). [CrossRef]  

43. J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. 21(15), 2758–2769 (1982). [CrossRef]  

44. I. Goodfellow, Y. Bengio, and A. Courville, Deep learning (Massachusetts Institute of Technology, 2016).

45. O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in International Conference on Medical image computing and computer-assisted intervention, (Springer, 2015), pp. 234–241.

46. Y. LeCun, C. Cortes, and C. J. C. Burges, “THE MNIST DATABASE of handwritten digits,” http://yann.lecun.com/exdb/mnist/.

47. C. E. Thomaz, “FEI Face Database,” https://fei.edu.br/cet/facedatabase.html.

48. T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick, “Microsoft coco: Common objects in context,” in European conference on computer vision, (Springer, 2014), pp. 740–755.

49. J.-M. Geusebroek, G. J. Burghouts, and A. W. Smeulders, “The amsterdam library of object images,” Int. J. Comput. Vis. 61(1), 103–112 (2005). [CrossRef]  

50. H. Zhao, O. Gallo, I. Frosio, and J. Kautz, “Loss functions for image restoration with neural networks,” IEEE Trans. Comput. Imaging 3(1), 47–57 (2017). [CrossRef]  

51. C. A. Metzler, F. Heide, P. Rangarajan, M. M. Balaji, A. Viswanath, A. Veeraraghavan, and R. G. Baraniuk, “Deep-inverse correlography: towards real-time high-resolution non-line-of-sight imaging,” Optica 7(1), 63–71 (2020). [CrossRef]  

52. E. Guo, C. Zhou, S. Zhu, L. Bai, and J. Han, “Dynamic imaging through random perturbed fibers via physics-informed learning,” Opt. Laser Technol. 158, 108923 (2023). [CrossRef]  

53. Y. Shi, E. Guo, M. Sun, L. Bai, and J. Han, “Non-invasive imaging through scattering medium and around corners beyond 3d memory effect,” Opt. Lett. 47(17), 4363–4366 (2022). [CrossRef]  

54. D. Wang, S. K. Sahoo, X. Zhu, G. Adamo, and C. Dang, “Non-invasive super-resolution imaging through dynamic scattering media,” Nat. Commun. 12(1), 1–9 (2021). [CrossRef]  

55. F. Wang, C. Wang, M. Chen, W. Gong, Y. Zhang, S. Han, and G. Situ, “Far-field super-resolution ghost imaging with a deep neural network constraint,” Light: Sci. Appl. 11(1), 1–11 (2022). [CrossRef]  

Figures (9)

Fig. 1. (a) Speckle recombination after reassignment and corresponding autocorrelation. (b) The normalized intensity values along the white dashed lines in (a). (c) The autocorrelations of GT and of the superposition with different numbers of reassignments; the images below are the corresponding reconstructions. N, the reassignment number with different bootstrapped results. (d) The evolution of the CC and PSNR for different N. (e) The normalized intensity values along the white dashed lines in (c). The evolution of the CC and PSNR with respect to GT in (a) and (c) is also presented.
Fig. 2. The superposition with different numbers of reassignments and the corresponding reconstructions. N, the reassignment number with different bootstrapped results. The evolution of the CC and PSNR with respect to GT is also presented.
Fig. 3. Schematic of the physics-aware learning framework via deep speckle reassignment. (a) In-depth physics-aware pre-processing step. (b) Neural network post-processing step.
Fig. 4. Experimental configuration for bootstrapped imaging through unknown diffusers with limited speckle grains. (a)-(c) Collected speckle patterns with different responses via adjustable NDFs; the speckles within the blue dashed line are the useful area containing the effective information.
Fig. 5. Comparison results without or with the bootstrapped step.
Fig. 6. Generalization reconstruction results with an unknown diffuser (D2) and unknown exposure states via the proposed method, and direct imaging with raw speckles with a known diffuser (D1) and unknown exposure states (N=4).
Fig. 7. Testing results for generalization reconstruction of Group 3 (N=6).
Fig. 8. (a) Wide-spectrum bootstrapped enhancement with speckle reassignment and corresponding autocorrelation. (b) Testing results for generalization reconstruction of Group 4 (N=12).
Fig. 9. Comparison results without or with the bootstrapped enhancement step for imaging exceeding the ME range (N=6).

Tables (1)

Table 1. Quantitative evaluation results of generalization results

Equations (11)

$$I = O \ast S,$$
$$I \star I = (O \ast S) \star (O \ast S) = (O \star O) \ast (S \star S),$$
$$I \star I = (O \star O) + C,$$
$$R(x, y) = I(x, y) \star I(x, y) = \mathrm{FFT}^{-1}\left\{ \left| \mathrm{FFT}\{I(x, y)\} \right|^{2} \right\}.$$
$$U_{TM}(a_{\xi}, b_{\eta}) = \frac{e^{j 2 \pi u / \lambda}}{j \lambda^{2} u}\, e^{\frac{j \pi}{\lambda u}\left(a_{\xi}^{2} + b_{\eta}^{2}\right)},$$
$$h_{v}(a_{\xi}, b_{\eta}) = \frac{e^{j 2 \pi u / \lambda}}{v \lambda^{2} u} \sum_{\xi_{m}} \sum_{\eta_{n}} TM(a_{\xi}, b_{\eta})\, e^{\frac{j \pi}{\lambda u}\left(a_{\xi}^{2} + b_{\eta}^{2}\right)} \frac{e^{j \frac{2 \pi}{\lambda} \sqrt{v^{2} + (a_{\xi} - a)^{2} + (b_{\eta} - b)^{2}}}}{\sqrt{v^{2} + (a_{\xi} - a)^{2} + (b_{\eta} - b)^{2}}},$$
$$I_{v} = O \ast \left| h_{v}(a_{\xi}, b_{\eta}) \right|^{2} = O \ast \left| \frac{e^{j 2 \pi u / \lambda}}{v \lambda^{2} u} \sum_{\xi_{m}} \sum_{\eta_{n}} TM(a_{\xi}, b_{\eta})\, e^{\frac{j \pi}{\lambda u}\left(a_{\xi}^{2} + b_{\eta}^{2}\right)} \frac{e^{j \frac{2 \pi}{\lambda} \sqrt{v^{2} + (a_{\xi} - a)^{2} + (b_{\eta} - b)^{2}}}}{\sqrt{v^{2} + (a_{\xi} - a)^{2} + (b_{\eta} - b)^{2}}} \right|^{2} = O \ast S_{v}(a_{\xi}, b_{\eta}).$$
$$I \star I = (O \star O) + C(a_{\xi}, b_{\eta}),$$
$$Loss = Loss_{NPCC} + Loss_{MSE},$$
$$Loss_{NPCC} = -1 \times \frac{\sum_{x=1}^{w} \sum_{y=1}^{h} \left[ i(x, y) - \hat{i} \right]\left[ I(x, y) - \hat{I} \right]}{\sqrt{\sum_{x=1}^{w} \sum_{y=1}^{h} \left[ i(x, y) - \hat{i} \right]^{2} \sum_{x=1}^{w} \sum_{y=1}^{h} \left[ I(x, y) - \hat{I} \right]^{2}}},$$
$$Loss_{MSE} = \sum_{x=1}^{w} \sum_{y=1}^{h} \left| \tilde{i}(x, y) - I(x, y) \right|^{2},$$
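The loss terms above can be sketched in NumPy; here $i$ denotes the network output and $I$ the ground truth. The function names are illustrative, and the MSE term is written without normalization, as in the equation (deep-learning frameworks typically average it):

```python
import numpy as np

def npcc_loss(pred, target):
    """Negative Pearson correlation coefficient between two images.

    Ranges from -1 (perfect positive correlation, the training goal)
    to +1 (perfect anti-correlation)."""
    p = pred - pred.mean()
    t = target - target.mean()
    return -np.sum(p * t) / np.sqrt(np.sum(p**2) * np.sum(t**2))

def mse_loss(pred, target):
    """Sum of squared pixel errors, as written in the equation."""
    return np.sum((pred - target) ** 2)

def total_loss(pred, target):
    """Combined objective: Loss = Loss_NPCC + Loss_MSE."""
    return npcc_loss(pred, target) + mse_loss(pred, target)
```

The NPCC term rewards structural agreement independently of intensity scale, while the MSE term anchors the absolute pixel values; combining the two is a common choice for speckle-reconstruction networks.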