
Lightweight super-resolution multimode fiber imaging with regularized linear regression

Open Access

Abstract

Super-resolution multimode fiber imaging provides the means to image samples quickly with compact and flexible setups, finding many applications from biology and medicine to material science and nanolithography. Typically, fiber-based imaging systems suffer from low spatial resolution and long measurement times. State-of-the-art computational approaches can achieve fast super-resolution imaging through a multimode fiber probe but currently rely either on per-sample optimized priors or on large data sets with correspondingly long training and image reconstruction times. This unfortunately hinders any real-time imaging application. Here we present an extremely fast non-iterative algorithm for compressive image reconstruction through a multimode fiber. The proposed approach avoids many of these constraints by determining the prior of the target distribution from a simulated data set and solving the under-determined inverse matrix problem with a closed-form mathematical solution. We demonstrate theoretical and experimental evidence for enhanced image quality and sub-diffraction spatial resolution of the multimode fiber optical system.

© 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Endoscopes play a key role in the examination of deep tissues and in imaging locations that are impossible to reach using conventional optical microscopy. As optical fibers offer a vast range of convenient properties, a broad variety of applications from bioimaging to semiconductor metrology strive to utilize this technology [1]. Multimode fiber (MMF) imaging provides the highest information density for a given footprint, which is very promising for endo-microscopy and possible integration with other fiber-based sensors [2–5]. However, imaging through an MMF introduces additional challenges, which require advanced computational techniques such as transmission matrix measurements, compressive ghost imaging, and holographic light shaping [6–8]. In MMF imaging, the same fiber is used to illuminate the sample and capture the signal. To simplify the optical setup, the total integrated intensity is recorded using a bucket detector instead of a camera. The light is scrambled as it propagates through an MMF, so a series of randomly varying illumination patterns is produced and projected onto the sample, and an advanced computational algorithm is needed to reconstruct the image by solving a convex optimization problem that inverts this projection process.

Super-resolution compressive MMF imaging increases the detection sensitivity, spatial resolution, and imaging speed [9–11]. However, the reconstruction process becomes even more computationally expensive because an under-determined system of equations has to be solved. Conventionally, images are reconstructed by minimizing the least-squares loss with an additional regularization term that exploits sparsity, using an iterative solver [12–16]. In recent years, a growing number of deep-learning-based approaches have been developed that outperform conventional techniques in terms of image quality and do not rely on this sparsity constraint [17–26]. However, these approaches typically require a large data set of labeled measurements, matching the expected samples, to train the network. Obtaining this labeled data set, as well as the iterative reconstruction procedures, contradicts the idea of a fast and flexible technique.

In this work, we present a physical-model-based, extremely fast, non-iterative algorithm that successfully retrieves an object function in multimode fiber imaging experiments. We propose regularized (reg.) least-squares linear regression to solve the ill-posed inverse matrix problem and reconstruct an image with super-resolution. A short reconstruction time is ensured by employing a physical forward model to simulate the data set, while regularization enforces generalization beyond the simulation. Additionally, the well-known closed-form solution (ridge regression) enables fast and reliable application of our technique.

In section 2, the mathematical background of linear regression and of the physical model is explained. Section 3 then applies the technique in a completely simulated setup, and section 4 tests its performance in experiments. Lastly, section 5 summarizes the findings.

2. Theory

We use the fact that, although the propagation of light in an MMF is complex and leads to randomly looking speckle patterns, it is deterministic and linear [27]. We use a regularization technique known as ridge regression [28–30]. In the proposed MMF imaging approach, we first construct the forward model, as it is easier to measure experimentally, which is extremely important for practical applications. Then, we develop a computational workflow that inverts the forward model by applying ridge regression to the signal predicted by the forward model. Finally, applying this inverse to experimentally measured intensities enables extremely fast and non-iterative reconstructions.

2.1 Physical forward model

Our experimental setup for super-resolution compressive imaging is presented in Fig. 1(a) and consists of a laser source whose beam is transmitted through an MMF and projected onto the camera and onto the sample using a 4$f$-system. Through the excitation of different sets of modes in the MMF, different intensity patterns are produced on the output facet of the fiber. These images are projected by two lenses and split by the beamsplitter before reaching the sample or the camera. The integrated intensity transmitted by the sample is measured by an avalanche photodiode. The sample is placed between the beamsplitter and the avalanche photodiode in the image plane of the tube lens. Samples are illuminated sequentially with different magnified images of the output facet of the MMF (speckle patterns), which are also captured by the camera. One measurement event includes the projection of a single speckle pattern and the recording of the total signal from the sample.

Fig. 1. Illustration of the experimental setup (1a) and its physical forward model (1b). Different random speckle patterns from the MMF are projected sequentially onto the sample and camera. The photodiode records the integrated intensity of the light transmitted through the sample, and the camera takes a picture of the speckle pattern, which is used to construct the measurement matrix $A$. This process is modeled by $x=Ay$, with $y$ being the flattened sample image and $x$ consisting of the integrated intensities for each speckle pattern. $width\times height$ is the number of pixels of the speckle pattern and $training\,size$ is the number of elements in the data set. Graphics by Alexander Franzen / CC BY-NC 3.0.

The physical forward model can be summarized as a system of linear equations. The process of projecting different speckle patterns onto the sample and measuring the transmitted intensity is formulated as:

$$x=Ay,$$
where $y$ is the flattened sample image, $x$ is the integrated intensity measurement, and $A$ is the measurement matrix, which consists of stacked and flattened illumination patterns (captured by the camera). Figure 1(b) depicts the model graphically.

To reconstruct an unknown image from measurements, the main challenge is to invert the matrix $A$. This challenge becomes even more difficult for super-resolution imaging, where no mathematically well-defined inverse exists, because the number of measurements is smaller than the number of pixels in the sample and therefore $A$ is under-determined. Machine learning offers an opportunity to utilize knowledge about the sample and the experimental setup, learned during the calibration phase, to improve the reconstructed images.

Our proposed algorithm is not limited to this specific experimental setup, but can instead be applied to any linearizable problem. Specifically, the extension to complex-valued problems is possible. Hence, it is applicable, for example, to computational imaging settings such as holography [31], tomography [32], or MRI [33].

2.2 Regularized linear regression

Linear regression can be used to fit a theoretical model to a data set composed of labeled measurements under the assumption of a linear relationship between a set of dependent variables and a set of explanatory variables. Since the inverted physical forward model $y=A^{-1}x$ represents such a linear relation, we propose linear regression to find the best left inverse $A^{-1}$.

We investigate the multidimensional case, where the explanatory variables $x\in \mathbb {R}^{M}$ (intensity measurement) and the dependent variables $y\in \mathbb {R}^{V}$ (flattened sample image) are vectors. We focus on the super-resolution regime with $M\ll V$. The linear model $y(x;W)=Wx$ produced by linear regression is then parameterized by the weight matrix $W\in \mathbb {R}^{V\times M}$. The loss function for the optimization consists of the fidelity term $\mathcal {L}_f$, which evaluates how well the model fits the experimentally measured data, and the regularization term $\mathcal {L}_r$, weighted by the hyperparameter $\gamma$. Linear regression constructs the best linear fit to the series of data points by minimizing the sum of squared errors (SSE), the most commonly used loss function for regression, which is the sum of squared differences between the true and predicted values for each data point.

Given a data set with $N$ data points $\{(X_i, Y_i)\}^N_{i=1}$, with $X\in \mathbb {R}^{M\times N}$ and $Y\in \mathbb {R}^{V\times N}$, the procedure minimizes the loss

$$\mathcal{L}=\underbrace{\sum_{n=1}^N \|Y_n - WX_n\|_2^2}_\mathrm{SSE} + \gamma\sum_{i,j} W_{i,j}^2,$$
which is composed of the SSE and a regularization term. Increasing the regularization parameter $\gamma$ reduces over-fitting to the data set and increases the generalization of the model. To reach the best performance, the data set must be sufficiently large and contain data points from all relevant regions of the parameter space to enable the model to generalize well.

This minimization problem has a closed-form solution, which is calculated by taking the derivative of $\mathcal {L}$ with respect to the weight $W$ and setting it to zero:

$$\begin{aligned} \frac{\partial\mathcal{L}}{\partial W_{}}&\overset{!}=0\\ \Leftrightarrow0&={-}(Y - WX)X^T + \gamma W\\ \Leftrightarrow W&=YX^T(XX^T+\gamma\mathbb{I})^{{-}1} \end{aligned}$$

The chosen regularization parameter $\gamma$ influences the effective condition number of $XX^T+\gamma \mathbb {I}$ and thus ensures the inverse is well-conditioned.
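As a concrete illustration, the closed-form solution of Eq. (3) amounts to a few lines of NumPy. The sketch below is our own minimal implementation, not the authors' released code; it solves the linear system rather than forming an explicit inverse, for numerical stability.

import numpy as np

def fit_ridge(X, Y, gamma):
    """Closed-form ridge regression, W = Y X^T (X X^T + gamma I)^{-1}.

    X: (M, N) array of intensity measurements, one column per training sample.
    Y: (V, N) array of flattened sample images, one column per training sample.
    Returns W: (V, M), the regularized approximate left inverse of A.
    """
    M = X.shape[0]
    G = X @ X.T + gamma * np.eye(M)  # (M, M); well-conditioned for gamma > 0
    # Solve W G = Y X^T; since G is symmetric, solving with G directly is valid.
    return np.linalg.solve(G, X @ Y.T).T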

The quality of the reconstructed image $y$ with respect to the true sample image $z$ is quantified using the Pearson correlation coefficient [34]

$${\displaystyle r={\frac {\sum z_{i}y_{i}-n{\bar {z}}{\bar {y}}}{(n-1)s_{z}s_{y}}},}$$
with the number of pixels in the images $n$, the mean $\bar y$ and standard deviation $s_y$ of $y$, and analogously for $z$. The correlation coefficient $r$ ranges from $-1$ to $1$, where $0$ indicates no correlation with the sample and $+1$ a perfect reconstruction.
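In code, this score is simply the sample Pearson correlation of the flattened images; NumPy's np.corrcoef evaluates Eq. (4) directly:

import numpy as np

def reconstruction_score(z, y):
    """Pearson correlation r between the true image z and the
    reconstruction y, both flattened to n pixels."""
    return np.corrcoef(z.ravel(), y.ravel())[0, 1]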

3. Simulations

First, we evaluate the performance of the MMF compressive imaging approach with reg. linear regression on simulated data. Our test samples consist of $28\times 28$ pixel images (flattened to $784$ integers) from the MNIST [35] data set, which contains $70\,000$ handwritten digits. This data is split into a training set of $60\,000$ images and a test set of $10\,000$ images. While the linear regression is performed on the training set, the test set is held back and only used to evaluate the performance of the algorithm in the final step. The images are flattened and normalized to mean and standard deviation equal to one. The image size mainly influences the computational complexity and is not expected to have a direct impact on the performance of the model.

All calculations are performed on a Windows Server 2019 machine with an AMD Ryzen Threadripper 3970X processor with 32 cores (3693 MHz, 64 logical processors). The total available physical memory is 128 GB (3200 MHz). The algorithm was implemented in Python [36] with the packages NumPy [37], Plotly [38], and Matplotlib [39].

The speckle illumination patterns, which determine the measurement matrix of the physical forward model, are simulated by first sampling 784 values from a complex Gaussian distribution. Then, the diffraction limit of the optical system is simulated by applying a low-pass frequency filter with a cutoff frequency $\nu$ to the random field, with $\nu \in (0,1]$ normalized to half of the spatial frequency spectrum. Finally, the intensities of the flattened speckle fields form the rows of the forward model matrix $A$ (see Fig. 1(b)).
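A minimal sketch of this simulation, under our assumption of a circular low-pass filter in the discrete Fourier domain (the exact filter convention is not specified in the text):

import numpy as np

def simulate_speckle_matrix(m, size=28, nu=0.3, seed=None):
    """Simulate m speckle illumination patterns as rows of the
    measurement matrix A. nu in (0, 1] is the low-pass cutoff,
    normalized to half the spatial frequency spectrum (Nyquist)."""
    rng = np.random.default_rng(seed)
    f = np.fft.fftfreq(size)                # frequencies in cycles/pixel, Nyquist = 0.5
    fx, fy = np.meshgrid(f, f)
    mask = np.hypot(fx, fy) <= nu * 0.5     # circular low-pass filter
    A = np.empty((m, size * size))
    for i in range(m):
        field = rng.normal(size=(size, size)) + 1j * rng.normal(size=(size, size))
        field = np.fft.ifft2(np.fft.fft2(field) * mask)   # diffraction-limited field
        A[i] = (np.abs(field) ** 2).ravel()               # speckle intensity pattern
    return A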

The intensity measurements $X$ are simulated by the physical forward model (see Eq. 1) using the simulated measurement matrix $A$ and the flattened MNIST sample images $Y$. Then $X$ and $Y$ of the training set are plugged into the closed-form solution of reg. linear regression (Eq. 3) to approximate $A^{-1}$. Finally, this approximate $A^{-1}$ can be used to reconstruct images from the simulated intensity measurements of the test set.
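Putting the pieces together, the whole pipeline reduces to a handful of matrix products. The snippet below is a hypothetical end-to-end run, assuming Y_train and Y_test are (784, N) arrays of flattened, normalized MNIST digits stored as columns and reusing the helper functions sketched above:

# End-to-end simulation sketch using the helpers defined above.
m, gamma = 200, 100.0
A = simulate_speckle_matrix(m, nu=0.3, seed=0)   # (m, 784) forward model
X_train = A @ Y_train                            # (m, N) simulated intensities
W = fit_ridge(X_train, Y_train, gamma)           # (784, m) approximate inverse
Y_hat = W @ (A @ Y_test)                         # reconstructions of the test digits
print(reconstruction_score(Y_test[:, 0], Y_hat[:, 0]))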

A well-known iterative method to reconstruct sparse images from compressive sensing measurements, basis pursuit denoising (BPDN), is used as a reference. It finds the reconstructed image $y$ by

$${\displaystyle \min _{y}\|y\|_{1}{\text{ subject to }}\|x-Ay\|_{2}\leq \sigma ,}$$
where $\|y\|_1$ is the $l_1$ norm of vector $y$ and $\sigma$ is a hyper-parameter for the level of noise. The $l_1$ norm is used here to enforce sparsity in the reconstructed image. Improvements upon BPDN would benefit many research directions and technical applications, e.g., silicon electro-optical sensors [40].
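For reference, a common way to approach BPDN numerically is through the closely related unconstrained LASSO form, which can be solved with a simple proximal-gradient (ISTA) loop. The sketch below is our own illustration of that proxy, not the solver used in the paper; the penalty weight lam plays a role analogous to $\sigma$.

import numpy as np

def ista_l1(A, x, lam=0.1, n_iter=500):
    """ISTA for the LASSO problem min_y 0.5*||x - A y||_2^2 + lam*||y||_1,
    an unconstrained proxy for the constrained BPDN formulation above."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    y = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = y - A.T @ (A @ y - x) / L          # gradient step on the fidelity term
        y = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft thresholding
    return y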

Figure 2 shows the average correlation coefficient $r$ (Eq. 4) of the reconstructed images for different $\nu$ and number of measurements $m$. The average is calculated over 5 different simulated measurement matrices. The performance of reg. linear regression (left) and BPDN (right) are shown for comparison. Any correlation above $90{\%}$ is highlighted using a colored scale, while any value below is plotted using gray scale.


Fig. 2. Correlation between the reconstructed and true image using reg. linear regression for different cutoff frequencies $\nu$ and number of measurements $m$ is shown. The regularization parameter $\gamma$ is set to $100$ and the training set contains the full $60\,000$ images. For each $\nu$ and $m$ the shown correlation is the maximum over 5 different simulated measurement matrices, where the correlation for each matrix is averaged over 5 reconstructed samples. BPDN is applied to the same 25 testing images for each $\nu$ and $m$ with $\sigma =0.1$.


The top right corner of the plots clearly exhibits the best performance for both algorithms. There, the cutoff frequency only affects high spatial frequencies, and the number of measurements has the same order of magnitude as the number of reconstructed pixels, making the computation of $A^{-1}$ a well-conditioned problem. Moving vertically down in the plot means reducing the number of measurements exponentially, hence exploring the super-resolution regime. Moving horizontally to the left means increasing the blurring due to the diffraction limit and thus increasing the difficulty of reconstructing detailed samples.

Reg. linear regression performs much better than the BPDN algorithm, specifically in the super-resolution regime. It requires only a fraction of the measurements to achieve a comparable reconstruction result. Furthermore, it also works better for low $\nu$, and its performance does not fall as sharply as that of the BPDN algorithm. Additionally, the proposed reg. linear regression approach is much faster than BPDN, even when the calibration time is included. In practice, the calibration is performed once per setup and does not contribute to the reconstruction time of an individual image. In contrast, BPDN performs the iterative optimization for every reconstructed image independently.

For all parameter combinations depicted in Fig. 2, BPDN performed 5 reconstructions in less than $44$ s, while the proposed approach spent at most $1$ s on the calibration and the reconstruction of 5 images.

Next, the performance with respect to different numbers of data points in the training set is shown in Fig. 3. So far, the entire $60\,000$ samples have been used, but smaller data sets are generally easier to obtain. As one can see, the performance increases dramatically for sample sizes up to $1\,000$, but there is not much improvement beyond that point. Although $1\,000$ samples seem to be enough to achieve near-optimal results, reg. linear regression displays stable performance when moving to lower sample numbers, without any sharp cutoff in performance.


Fig. 3. Correlation between the reconstructed and true image using reg. linear regression for different cutoff frequencies $\nu$, number of measurements $m$ and number of training samples. The regularization parameter is set to $\gamma =100$.


In the next set of simulations, we tested how robust reg. linear regression is with respect to noise. We assume normally distributed noise with different signal-to-noise ratios (SNR), added to the simulated intensity measurements $X$. Since the speckle patterns are also recorded with noise, complex noise with the same SNR is added to the patterns before calculating their intensity. Figure 4 shows the comparison between the reconstructed samples using reg. linear regression and BPDN at SNRs of 30 dB and 10 dB.
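A minimal sketch of this noise model, under our assumption that the SNR is defined with respect to the mean signal power:

import numpy as np

def add_noise(signal, snr_db, seed=None):
    """Add zero-mean Gaussian noise at a given SNR (in dB); complex
    noise is used when the input (e.g., a speckle field) is complex."""
    rng = np.random.default_rng(seed)
    noise_power = np.mean(np.abs(signal) ** 2) / 10 ** (snr_db / 10)
    noise = rng.normal(scale=np.sqrt(noise_power), size=signal.shape)
    if np.iscomplexobj(signal):
        imag = rng.normal(scale=np.sqrt(noise_power), size=signal.shape)
        noise = (noise + 1j * imag) / np.sqrt(2)   # split power over both quadratures
    return signal + noise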


Fig. 4. The reconstruction results from reg. linear regression (reg. LR) and BPDN are directly compared for different numbers of measurements $m$ and two different signal-to-noise ratios (SNR), with a cutoff frequency of $\nu =0.3$. In each image, the correlation with the original sample is given. BPDN uses $\sigma =0.1$ and $\sigma =0.7$ for 30 dB and 10 dB, respectively, which gave the best reconstruction results in our test.


The performance of BPDN deteriorates much more under noise, while reg. linear regression stays around $r\approx 95{\%}$ in the large-$m$ regime and around $r\approx 80{\%}$ in the low-$m$ regime. The lower reconstruction scores of BPDN for the largest $m$ are caused by the reconstruction of noise: the large number of measurements enables BPDN to reconstruct the measured noise, which is not part of the ground truth sample, thereby reducing the reconstruction score. However, this effect vanishes in the super-resolution regime.

The hyperparameter $\gamma$ is chosen to satisfy the conflicting goals of stabilizing the model while at the same time remaining flexible enough for all kinds of measurements. It could be tuned to improve the performance of the model. However, its optimal value depends on many different quantities, such as the training set size, noise levels, variety in the data, and more. It can therefore be difficult to determine the optimal value from theory alone, but fortunately one can simply test different values and compare the performance of the model on the training set. In this work, no fine tuning was performed on $\gamma$, because this would go beyond the scope of a proof of concept.
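Such a search could look like the hypothetical grid sweep below, which scores candidate values of $\gamma$ on a held-out validation split (X_val, Y_val) of the simulated training data; the paper itself performs no such tuning.

import numpy as np

# Hypothetical grid search over gamma, scored on a held-out validation split.
scores = {}
for gamma in 10.0 ** np.arange(-2, 7):
    W = fit_ridge(X_train, Y_train, gamma)
    Y_hat = W @ X_val
    scores[gamma] = np.mean([reconstruction_score(Y_val[:, i], Y_hat[:, i])
                             for i in range(Y_val.shape[1])])
best_gamma = max(scores, key=scores.get)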

4. Experimental validation

In this section, the same reconstruction procedure as in section 3 is applied to experimentally recorded speckle illumination patterns and intensity measurements (data from [41,42]). The measurement matrix is now constructed from the experimental speckle patterns and, just as before, is used to simulate the training set from the MNIST data. In this way, only the speckle illumination patterns have to be measured experimentally to fit the least-squares linear regression model.

The reconstruction performance is again measured by calculating the correlation coefficient between the reconstructed and ground truth image, but now using experimentally measured intensities in the reconstruction process. Because of experimental imperfections and noise, the measured intensities deviate from the ideal prediction of the physical forward model based on the previously recorded speckle patterns. Hence, the performance is expected to decrease compared to section 3, because of the mismatch between the simulated training measurements and the experimental testing measurements. While BPDN does not rely on any simulated training measurements, it is still affected in a similar way, because of the inconsistency between the measurement matrix and the testing intensity measurements.

For the experiment, a handwritten digit from the MNIST testing set is prepared on a microscope slide using maskless UV photolithography (365 nm) and lift-off of a sputtered reflective 200-nm-thick aluminium film. The ground truth is obtained by separately measuring the handwritten digit with bright-field microscopy and then cropping and normalizing the image to the pre-established format.

During the experiment, $2\,000$ speckle illumination patterns with corresponding intensity measurements were recorded. However, only a random subset with a specified number of measurements $m$ is selected during the analysis for the reconstruction procedure, and different random selections for a given $m$ are used to evaluate the stability of the algorithm.
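This subsampling step is straightforward; a sketch, with A_exp and x_exp as hypothetical arrays holding the $2\,000$ recorded patterns (rows) and intensities:

import numpy as np

# Draw a random subset of m of the 2000 recorded pattern/intensity pairs.
rng = np.random.default_rng()
idx = rng.choice(A_exp.shape[0], size=m, replace=False)
A_sub, x_sub = A_exp[idx], x_exp[idx]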

Figure 5 shows the averaged reconstruction score for different numbers of measurements $m$ for reg. linear regression and BPDN. Our method clearly exceeds the peak performance of BPDN for any $m$. The stability is also improved, as shown by the smaller standard deviation for medium and large $m$. These results testify to the improved performance and robustness of our new algorithm even in experimental settings. Combined with the increased speed and flexibility of our approach, reg. linear regression thus also shows substantial benefits in practice.


Fig. 5. Correlation $r$ between the reconstructed and ground truth image using reg. linear regression and BPDN for different numbers of measurements $m$. The mean and standard deviation for each $m$ are calculated from 50 replications using different speckle patterns and corresponding intensity measurements from the experiment. The regularization parameter is set to $\log _{10}(\gamma )=12$ and $\sigma =0.4$. 50 replications are chosen as a trade-off between the certainty of the results and reasonable computation and experimentation times.


5. Conclusion

We present the application of reg. linear regression to super-resolution MMF imaging in simulated and experimental environments. In both cases, we demonstrate improved reconstruction quality compared to a traditional reconstruction method, as well as improved overall stability and resilience to noise. Due to its closed-form solution, reg. linear regression also comes with an astonishing speed-up, because no iterative steps are performed. These results prompt the exploration of further extensions of this rudimentary approach; adding additional loss terms to enforce sparsity in the reconstructed output, or using a more complex inverse model (an artificial neural network), we leave for future work.

Funding

National Growth Fund program NXTGEN HIGHTECH.

Acknowledgments

Part of this work has been carried out within ARCNL, a public-private partnership between UvA, VU, NWO, and ASML. This work is made possible in part by a contribution from the National Growth Fund program NXTGEN HIGHTECH through the “(Nano) Metrology Systems” project.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. H. Cao, T. Čižmár, and S. Turtaev, “Controlling light propagation in multimode fibers for imaging, spectroscopy, and beyond,” Adv. Opt. Photon. 15(2), 524 (2023). [CrossRef]  

2. A. G. Leal-Junior, A. Frizera, C. Marques, et al., “Optical fiber specklegram sensors for mechanical measurements: A review,” IEEE Sensors J. 20(2), 569–576 (2020). [CrossRef]  

3. K. Wu, H. Zhang, and Y. Chen, “All-silicon microdisplay using efficient hot-carrier electroluminescence in standard 0.18 µm CMOS technology,” IEEE Electron Device Lett. 42(4), 541–544 (2021). [CrossRef]

4. A. Pospori, C. A. F. Marques, and O. Bang, “Polymer optical fiber Bragg grating inscription with a single UV laser pulse,” Opt. Express 25(8), 9028–9038 (2017). [CrossRef]

5. C. A. F. Marques, A. Pospori, and D. Sáez-Rodríguez, “Aviation fuel gauging sensor utilizing multiple diaphragm sensors incorporating polymer optical fiber Bragg gratings,” IEEE Sensors J. 16(15), 6122–6129 (2016). [CrossRef]

6. I. M. Vellekoop, A. Lagendijk, and A. Mosk, “Exploiting disorder for perfect focusing,” Nat. Photonics 4(5), 320–322 (2010). [CrossRef]  

7. T. Čižmár and K. Dholakia, “Exploiting multimode waveguides for pure fibre-based imaging,” Nat. Commun. 3(1), 1027 (2012). [CrossRef]  

8. M. J. Padgett and R. W. Boyd, “An introduction to ghost imaging: quantum and classical,” Phil. Trans. R. Soc. A. 375(2099), 20160233 (2017). [CrossRef]  

9. L. V. Amitonova and J. F. De Boer, “Compressive imaging through a multimode fiber,” Opt. Lett. 43(21), 5427–5430 (2018). [CrossRef]  

10. L. V. Amitonova and J. F. de Boer, “Endo-microscopy beyond the Abbe and Nyquist limits,” Light: Sci. Appl. 9(1), 81 (2020). [CrossRef]

11. M. Pascucci, S. Ganesan, and A. Tripathi, “Compressive three-dimensional super-resolution microscopy with speckle-saturated fluorescence excitation,” Nat. Commun. 10(1), 1327 (2019). [CrossRef]  

12. G. Calisesi, A. Ghezzi, and D. Ancora, “Compressed sensing in fluorescence microscopy,” Progress in Biophysics and Molecular Biology (2021).

13. M. A. T. Figueiredo, R. D. Nowak, and S. J. Wright, “Gradient Projection for Sparse Reconstruction: Application to Compressed Sensing and Other Inverse Problems,” IEEE J. Sel. Top. Signal Process. 1(4), 586–597 (2007). [CrossRef]  

14. S. Gazit, A. Szameit, Y. C. Eldar, et al., “Super-resolution and reconstruction of sparse sub-wavelength images,” Opt. Express 17(26), 23920 (2009). [CrossRef]  

15. N. Kulkarni, P. Nagesh, R. Gowda, et al., “Understanding Compressive Sensing and Sparse Representation-Based Super-Resolution,” IEEE Trans. Circuits Syst. Video Technol. 22(5), 778–789 (2012). [CrossRef]  

16. B. Lochocki, K. Abrashitova, J. F. de Boer, et al., “Ultimate resolution limits of speckle-based compressive imaging,” Opt. Express 29(3), 3943 (2021). [CrossRef]  

17. K. H. Jin, M. T. McCann, E. Froustey, et al., “Deep convolutional neural network for inverse problems in imaging,” IEEE Trans. on Image Process. 26(9), 4509–4522 (2017). [CrossRef]  

18. G. Barbastathis, A. Ozcan, and G. Situ, “On the use of deep learning for computational imaging,” Optica 6(8), 921–943 (2019). [CrossRef]  

19. L. Li, H. Ruan, and C. Liu, “Machine-learning reprogrammable metasurface imager,” Nat. Commun. 10(1), 1–8 (2019). [CrossRef]  

20. M. Del Hougne, S. Gigan, and P. Del Hougne, “Deeply subwavelength localization with reverberation-coded aperture,” Phys. Rev. Lett. 127(4), 043903 (2021). [CrossRef]  

21. M. A. B. Abbasi, M. O. Akinsolu, and B. Liu, “Machine learning-assisted lens-loaded cavity response optimization for improved direction-of-arrival estimation,” Sci. Rep. 12(1), 8511 (2022). [CrossRef]  

22. M. W. Matthès, Y. Bromberg, J. de Rosny, et al., “Learning and avoiding disorder in multimode fibers,” Phys. Rev. X 11(2), 021060 (2021). [CrossRef]  

23. B. Rahmani, D. Loterie, and G. Konstantinou, “Multimode optical fiber transmission with a deep learning network,” Light: Sci. Appl. 7(1), 69 (2018). [CrossRef]  

24. H. Chen, Z. He, and Z. Zhang, “Binary amplitude-only image reconstruction through an MMF based on an AE-SNN combined deep learning model,” Opt. Express 28(20), 30048–30062 (2020). [CrossRef]

25. N. Borhani, E. Kakkava, C. Moser, et al., “Learning to see through multimode fibers,” Optica 5(8), 960–966 (2018). [CrossRef]  

26. P. Fan, T. Zhao, and L. Su, “Deep learning the high variability and randomness inside multimode fibers,” Opt. Express 27(15), 20241–20258 (2019). [CrossRef]  

27. A. W. Snyder and J. D. Love, Optical waveguide theory (Kluwer, 2000).

28. G. H. Golub, P. C. Hansen, and D. P. O’Leary, “Tikhonov regularization and total least squares,” SIAM Journal on Matrix Analysis and Applications 21(1), 185–194 (1999). [CrossRef]  

29. W. N. van Wieringen, “Lecture notes on ridge regression,” (2023).

30. M. Hanke and P. C. Hansen, “Regularization methods for large-scale problems,” (1993).

31. D. J. Brady, K. Choi, and D. L. Marks, “Compressive holography,” Opt. Express 17(15), 13040–13049 (2009). [CrossRef]  

32. L. Donati, M. Nilchian, and S. Trépout, “Compressed sensing for STEM tomography,” Ultramicroscopy 179, 47–56 (2017). [CrossRef]

33. M. Lustig, D. Donoho, and J. M. Pauly, “Sparse MRI: The application of compressed sensing for rapid MR imaging,” Magn. Reson. Med. 58(6), 1182–1195 (2007). [CrossRef]

34. V. Starovoitov, E. Eldarova, and K. Iskakov, “Comparative analysis of the SSIM index and the pearson coefficient as a criterion for image similarity,” Eurasian Journal of Mathematical and Computer Applications 8(1), 76–90 (2020). [CrossRef]  

35. L. Deng, “The MNIST database of handwritten digit images for machine learning research,” IEEE Signal Process. Mag. 29(6), 141–142 (2012). [CrossRef]

36. G. Van Rossum and F. L. Drake, Python 3 Reference Manual (CreateSpace, Scotts Valley, CA, 2009).

37. C. R. Harris, K. J. Millman, and S. J. van der Walt, “Array programming with NumPy,” Nature 585(7825), 357–362 (2020). [CrossRef]  

38. Plotly Technologies Inc., Collaborative Data Science (2015).

39. J. D. Hunter, “Matplotlib: A 2D graphics environment,” Comput. Sci. Eng. 9(3), 90–95 (2007). [CrossRef]

40. K. Xu, “Silicon electro-optic micro-modulator fabricated in standard CMOS technology as components for all silicon monolithic integrated optoelectronic systems,” J. Micromech. Microeng. 31(5), 054001 (2021). [CrossRef]

41. K. Abrashitova and L. V. Amitonova, “High-speed label-free multimode-fiber-based compressive imaging beyond the diffraction limit,” Opt. Express 30(7), 10456 (2022). [CrossRef]  

42. W. Li, K. Abrashitova, G. Osnabrugge, et al., “Generative Adversarial Network for Superresolution Imaging through a Fiber,” Phys. Rev. Applied 18(3), 034075 (2022). [CrossRef]  
