Abstract

Image reconstruction under multiple light scattering is crucial in a number of applications such as diffraction tomography. The reconstruction problem is often formulated as a nonconvex optimization, where a nonlinear measurement model is used to account for multiple scattering and regularization is used to enforce prior constraints on the object. In this paper, we propose a powerful alternative to this optimization-based view of image reconstruction by designing and training a deep convolutional neural network that can invert multiple scattered measurements to produce a high-quality image of the refractive index. Our results on both simulated and experimental datasets show that the proposed approach is substantially faster and achieves higher imaging quality compared to the state-of-the-art methods based on optimization.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

The problem of reconstructing the spatial distribution of the dielectric permittivity of an unknown object from the measurements of the light it scatters is common in many applications such as tomographic microscopy [1] and digital holography [2]. The problem is often formulated as a linear inverse problem by adopting scattering models based on the first Born [3] or Rytov [4] approximations. However, these linear approximations are inaccurate when scattering is strong, which leads to reconstruction artifacts for objects that are large or have high permittivity contrasts [5]. For strongly scattering objects, it is preferable to use nonlinear measurement models that can account for multiple light scattering inside the object [6–14].

When adopting a nonlinear measurement model, it is common to formulate image reconstruction as an optimization problem. The objective function in the optimization typically includes two terms: a data-fidelity term that ensures that the final image is consistent with measured data, and a regularizer that mitigates the ill-posedness of the problem by promoting solutions with desirable properties [15]. For example, one of the most widely adopted regularizers is total variation (TV), which preserves image edges while promoting smoothness [16]. TV is often interpreted as a sparsity-enforcing ℓ₁-penalty on the image gradient and has proven successful in the context of diffraction tomography with and without multiple scattering [12,13,17–21].

Despite the recent progress in regularized image reconstruction under multiple scattering, the corresponding optimization problem is difficult to solve. The challenging aspects are the nonconvex nature of the objective and the large amount of data that needs to be processed in typical imaging applications. In particular, when the scattering is strong, the problem becomes highly nonconvex, which negatively impacts both the speed of reconstruction and the quality of the final image [14].

In this paper, we consider a fundamentally different approach to the problem of image reconstruction under multiple scattering. Recently, several results have interpreted multiple scattering as a forward-pass of a convolutional neural network (ConvNet) [10,13,20]. This view inspires us to reconstruct the object by designing another ConvNet that is specifically trained to invert multiple scattering in a purely data-driven fashion. While several recent publications have applied various deep learning architectures for solving linear inverse problems [22–32], we consider a fundamentally different situation where multiple scattering makes the measurement operator both nonlinear and object dependent (and hence unknown). Our approach is also related to the recent work on reverse photon migration for diffuse optical tomography [33]. However, our focus is on diffractive imaging, where the light propagation is assumed to be deterministic, rather than stochastic as in [33]. We extensively validated the proposed method on several simulated and real datasets by comparing the method against recent optimization-based approaches based on the Lippmann-Schwinger (LS) model [12,34]. Our results indicate that it is possible to invert multiple scattering by training a ConvNet, even when imaging strongly scattering objects for which optimization-based approaches underperform. To the best of our knowledge, the results here are the first to show the potential of deep learning to reconstruct high-quality images from multiple scattered light measurements.

2. Nonlinear diffractive imaging

In this section, we describe the traditional optimization-based approach for nonlinear diffractive imaging. We start by describing the mathematical model for multiple scattering, and then review the basics of regularized image reconstruction.

Consider an object with permittivity distribution ϵ(r) in a bounded domain Ω ⊂ ℝ², immersed in a background medium of permittivity ϵ_b and illuminated by the incident electric field u_in(r) (see the schematic in Fig. 1). The incident field u_in is assumed to be monochromatic and coherent, and it is known inside Ω as well as at the sensor locations. The result of the object-light interaction is measured at the sensors as a scattered field u_sc(r). Mathematically, the relationship between the object and the light is accurately described by the Lippmann-Schwinger equation [35]

$$u(r) = u_{\mathrm{in}}(r) + \int_\Omega g(r - r')\, f(r')\, u(r')\, \mathrm{d}r', \qquad (r \in \Omega) \tag{1}$$
where u(r) = u_in(r) + u_sc(r) is the total electric field, f(r) ≜ k²(ϵ(r) − ϵ_b) is the scattering potential, assumed to be real, and k = 2π/λ is the wavenumber. In two dimensions, the Green's function g(r) is given by
$$g(r) \triangleq \frac{j}{4} H_0^{(1)}\!\left(k_b \|r\|_2\right), \tag{2}$$
where k_b ≜ k√ϵ_b is the wavenumber of the background medium and H_0^(1) is the zero-order Hankel function of the first kind. Given the total field u inside the image domain Ω, the scattered field at the sensor locations Γ is given by
$$u_{\mathrm{sc}}(r) = \int_\Omega g(r - r')\, f(r')\, u(r')\, \mathrm{d}r'. \qquad (r \in \Gamma) \tag{3}$$
By discretizing equations (1) and (3), we can obtain the discrete formulation of scattering
$$\begin{aligned} u &= u_{\mathrm{in}} + G(u \cdot x) \\ y &= S(u \cdot x) + e, \end{aligned} \tag{4}$$
where x ∈ ℝ^N is the discretized scattering potential f, y ∈ ℂ^M is the measured scattered field u_sc at Γ, u_in ∈ ℂ^N is the incident field u_in inside Ω, S ∈ ℂ^{M×N} is the discretization of the Green's function evaluated at Γ, G ∈ ℂ^{N×N} is the discretization of the Green's function evaluated inside Ω, "·" denotes a component-wise multiplication between two vectors, and e ∈ ℂ^M models the additive noise at the measurements. The final nonlinear forward model can then be formally specified as follows
$$H(x) \triangleq S(u(x) \cdot x) \quad \text{where} \quad u(x) \triangleq \operatorname*{arg\,min}_{u \in \mathbb{C}^N} \left\{ \frac{1}{2} \left\| u - G(u \cdot x) - u_{\mathrm{in}} \right\|_2^2 \right\}, \tag{5}$$
which enables the formulation of the following inverse problem
$$y = H(x) + e, \tag{6}$$
where the goal is to recover the unknown image x ∈ ℝ^N from the noisy measurements y ∈ ℂ^M. The measurement noise e ∈ ℂ^M is often assumed to be an independent and identically distributed (i.i.d.) Gaussian vector.
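For concreteness, the discretized model (4) and the forward operator H(·) in (5) can be sketched in a few lines of Python. The matrices G and S below are random placeholders rather than discretized Green's functions, and all sizes are illustrative; the point is only the structure of the computation.

```python
import numpy as np

# Toy sketch of the discretized model (4) and forward operator H(.) in (5).
# G and S are random placeholders, not discretized Green's functions.
rng = np.random.default_rng(0)
N, M = 64, 16                       # image pixels, sensor measurements

x = 0.05 * rng.random(N)            # real scattering potential, weak contrast
u_in = np.exp(1j * rng.random(N))   # incident field inside the domain
G = 0.1 * (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N)))
S = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))

def forward(x, u_in, G, S):
    """Evaluate y = H(x) by solving u = u_in + G(u . x) for the total field.

    Rearranging gives (I - G diag(x)) u = u_in, a linear system in u; here
    it is solved directly, while in practice iterative solvers are used.
    """
    A = np.eye(len(x)) - G @ np.diag(x)
    u = np.linalg.solve(A, u_in)     # total field inside the domain
    return S @ (u * x), u            # scattered field at the sensors

y, u = forward(x, u_in, G, S)
# Consistency with the fixed-point form of the Lippmann-Schwinger equation
assert np.allclose(u, u_in + G @ (u * x))
```

Note that solving for u exactly is what makes the overall map x ↦ y nonlinear and object dependent: the system matrix itself contains x.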


Fig. 1 We consider an object with scattering potential f(r), illuminated by an input wave u_in that interacts with the object and gives rise to the scattered field u_sc at the sensors.


In practice, the problem in (6) is often ill-posed and the standard approach for solving it is by formulating an optimization problem

$$\hat{x} = \operatorname*{arg\,min}_{x \in \mathbb{R}^N} \left\{ \frac{1}{2} \|y - H(x)\|_2^2 + \mathcal{R}(x) \right\}, \tag{7}$$
where the data-fidelity term ensures that the final image is consistent with the measured data and the regularizer ℛ promotes solutions with desirable properties. Two common image regularizers are the sparsity-promoting penalty ℛ(x) ≜ τ‖x‖₁ and the total variation (TV) penalty ℛ(x) ≜ τ‖Dx‖₁, where D is the discrete gradient operator [16,36,37]. Additionally, the recent plug-and-play priors (PnP) framework [38–40] popularized the use of more general denoisers for regularizing the image reconstruction process.

Several recent publications have proposed efficient solutions to problem (7) when H(·) corresponds to the forward scattering problem [12,13,20,34]. The central idea is to find an efficient method to compute the gradient of the data-fidelity term in (7) and combine it with the proximal operator of the regularizer within FISTA [41] or ADMM [42]. For example, the recursive Born method (RBM) in [20] expands H(·) using the Born series [35] and uses the chain rule to efficiently evaluate the gradient. However, for strongly scattering objects this series might diverge, which limits the practical applicability of RBM. The SEAGLE method from [13] circumvents this problem by considering a more general expansion of the data-fidelity term obtained with an arbitrary algorithmic procedure, such as the accelerated gradient method [43]. The combination of SEAGLE with various regularizers such as TV and BM3D [44] has led to state-of-the-art algorithms for inverse scattering [34]. More recently, it was shown in [12] that it is possible to obtain reconstruction quality identical to SEAGLE at a lower computational cost by efficiently evaluating the Jacobian of (5). The combination of [12] (subsequently denoted LS) with suitable regularizers is currently the state of the art in inverse scattering. In the next section, we develop an alternative that inverts multiple scattering with a ConvNet trained end-to-end.
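As a rough illustration of the proximal-gradient machinery described above, the following sketch runs FISTA with a non-negativity projection as the proximal step. A random linear operator A stands in for the nonlinear H(·), so this is a toy instance of the algorithmic template, not the LS or SEAGLE method itself; all names and sizes are illustrative.

```python
import numpy as np

# Toy FISTA sketch for min_x 0.5*||y - A x||^2 s.t. x >= 0, with a random
# linear A standing in for the nonlinear forward model H(.).
rng = np.random.default_rng(1)
M, N = 80, 32
A = rng.standard_normal((M, N)) / np.sqrt(M)
x_true = np.maximum(rng.standard_normal(N), 0)   # non-negative ground truth
y = A @ x_true                                   # noiseless toy measurements

def fista(y, A, n_iter=500):
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1]); s = x.copy(); q = 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ s - y)             # gradient of the data-fidelity
        x_new = np.maximum(s - grad / L, 0)  # proximal step: project onto x >= 0
        q_new = 0.5 * (1 + np.sqrt(1 + 4 * q ** 2))
        s = x_new + ((q - 1) / q_new) * (x_new - x)   # momentum update
        x, q = x_new, q_new
    return x

x_hat = fista(y, A)
assert np.allclose(x_hat, x_true, atol=1e-3)
```

With the nonlinear model, the only change to this template is that `grad` requires evaluating H(·) and its Jacobian, which is exactly the expensive step the methods above are designed to accelerate.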

3. Scattering decoder

We now describe our proposed deep learning approach called Scattering Decoder (ScaDec).

3.1. Backprojection

The general framework of our approach is illustrated in Fig. 2. The first step in the method is backprojection, which simply transforms the collected data from the measurement domain to the image domain. We define the backprojection of the measurements generated by the kth transmitter as

$$z_k = P_k y_k \quad \text{with} \quad P_k \triangleq \mathrm{diag}(u_{\mathrm{in},k}^*)\, S^{\mathrm{H}}, \tag{8}$$
where the vector y_k ∈ ℂ^M contains the measurements due to the kth transmitter collected by the M receivers, and the matrix P_k ∈ ℂ^{N×M} is the backprojection operator. Inside the operator P_k, the matrix S^H ∈ ℂ^{N×M} is the Hermitian transpose of the discretized Green's function S, and u*_{in,k} is the element-wise conjugate of the incident field emitted by the kth transmitter. The output z_k ∈ ℂ^N is a complex vector with N elements, which matches the number of pixels in the original image. When the data is collected with multiple transmissions, we define the backprojection of K transmitters as
$$w = \sum_{k=1}^{K} z_k = \sum_{k=1}^{K} P_k y_k, \tag{9}$$
where the vector w ∈ ℂ^N is the linear combination of the z_k and K denotes the number of transmitters. Note that the backprojection in (9) does not rely on the actual forward model H(·) in (5), which is both nonlinear and object dependent. Remarkably, as we shall see, this simple backprojection followed by a specific ConvNet architecture is sufficient to recover a high-quality image from multiple scattered measurements.
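The backprojection in (8) and (9) reduces to a few lines of NumPy. Here S and the incident fields are random placeholders with illustrative dimensions; only the structure of the computation follows the text.

```python
import numpy as np

# Sketch of the backprojection (8)-(9): w = sum_k diag(conj(u_in_k)) S^H y_k.
# S, u_in, and y are random placeholders with illustrative sizes.
rng = np.random.default_rng(2)
N, M, K = 128, 36, 8
S = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))
u_in = np.exp(1j * rng.random((K, N)))          # incident field per transmitter
y = rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))

def backproject(y, u_in, S):
    """Sum the per-transmitter backprojections z_k = diag(u_in_k^*) S^H y_k."""
    w = np.zeros(S.shape[1], dtype=complex)
    for y_k, u_k in zip(y, u_in):
        w += np.conj(u_k) * (S.conj().T @ y_k)  # element-wise scaling by u_in_k^*
    return w

w = backproject(y, u_in, S)
# The two channels fed to the ConvNet are the real and imaginary parts of w
features = np.stack([w.real, w.imag])
assert features.shape == (2, N)
```

The loop form avoids materializing the N × M matrices P_k; only matrix-vector products with S^H and element-wise scalings are needed.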


Fig. 2 Overview of the proposed approach, which first backprojects the data into a complex-valued image and then maps this image into the final image with a ConvNet.


Note that since w is a complex vector, we consider its real and imaginary parts as two distinct feature maps of the object f. Thus, the backprojection can be viewed as a fixed layer in a ConvNet with M inputs and two outputs to the subsequent layers (see Fig. 2) [45]. The weights inside the layer are characterized by the P_k's, and the activation functions for the output nodes are Re(·) and Im(·), respectively.

3.2. U-Net decoder

We design the ScaDec model based on the popular U-Net architecture [45], which was recently applied to various image reconstruction tasks such as X-ray CT [27,31]. Fig. 3 shows a detailed diagram of the proposed ConvNet architecture. There are two key properties that recommend U-Net for our purpose.


Fig. 3 Visual illustration of the proposed learning architecture based on U-Net [45]. The input consists of two channels for the real and imaginary parts of the backprojected vector w ∈ ℂ^N. The output is a single image of the scattering potential x ∈ ℝ^N.


  1. Multi-resolution decomposition: the decoder employs a contraction-expansion structure based on max-pooling and up-convolution. This means that for a fixed convolution kernel size (3 × 3 in our case), the effective receptive field of the network increases as the input goes deeper into the network.
  2. Local-global composition: at each resolution level, the outputs of the convolutional block in the contraction path are directly concatenated with the input of the corresponding convolutional block in the expansion path. These skip connections enable the later layers to reconstruct feature maps with both local details and global texture.
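The first property can be made concrete with a back-of-the-envelope receptive-field calculation. The layer counts below (two 3 × 3 convolutions per level, 2 × 2 max-pooling between levels) follow the standard U-Net recipe and are not necessarily the exact ScaDec configuration.

```python
# Back-of-the-envelope sketch of how the receptive field of a 3x3-conv
# U-Net encoder grows with depth. Layer counts are illustrative.
def receptive_field(levels, convs_per_level=2, kernel=3):
    """Receptive field (in input pixels) after `levels` contraction levels,
    each with `convs_per_level` kernel x kernel convolutions followed by
    2x2 max-pooling (no pooling after the last level)."""
    rf, jump = 1, 1                       # field size, stride in input pixels
    for level in range(levels):
        for _ in range(convs_per_level):
            rf += (kernel - 1) * jump     # each conv widens the field
        if level < levels - 1:
            jump *= 2                     # 2x2 max-pool doubles the stride
    return rf

# The field widens rapidly with depth even though every kernel stays 3x3
sizes = [receptive_field(L) for L in (1, 2, 3, 4)]  # [5, 13, 29, 61]
```

This is why a deep contraction path can aggregate image-wide context from small kernels, which matters here because multiple scattering couples distant pixels of the object.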

To conclude this section, we have described a trainable, end-to-end mapping from the multiple scattered measurements to the distribution of the permittivity. This approach is fundamentally different from formulations based on traditional regularized optimization [12,13,20] or on PnP [34]. The key advantage of end-to-end training is that it does not require iterative evaluation of H(·) or its Jacobian, which makes ScaDec appealing from a computational perspective. However, this also means that the method does not explicitly impose consistency with the measurements y; enforcing such consistency could potentially boost its performance further. Nonetheless, the power of ScaDec is corroborated by the results in Section 4, which demonstrate the ability of the method to efficiently form high-quality images from multiple scattered measurements.

4. Experimental validation

We now present the results of validating our method on simulated and experimental datasets. We evaluate the data-adaptive recovery capability of ScaDec by selecting datasets that consist of images with nontrivial features that can be well represented by a ConvNet, but are not well captured by fixed regularizers such as TV. The first simulated dataset consists of synthetically generated piecewise-smooth images with sharp edges and smooth Gaussian regions. The second simulated dataset consists of public human face images [47]. The experimental dataset is the public dataset provided by the Fresnel Institute [48], which consists of experimental microwave measurements of the scattered electric field from 2D targets consisting of foam and plastic cylinders.

4.1. Results on simulated datasets

The two simulated datasets were obtained by using a high-fidelity simulation of multiple scattering with the conjugate-gradient solver. Each of the datasets contains 1548 images, separated into 1500 images for training, 24 images for validation, and 24 images for testing. The physical size of images was set to 18 cm × 18 cm, discretized to a 128 × 128 grid. The background medium was assumed to be air with ϵb = 1 and the wavelength of the illumination was set to λ = 0.84 cm. The measurements were collected from 40 transmissions uniformly distributed along a circle of radius 1.6 m and, for each transmission, 360 measurements around the object were recorded. The simulated scattered data was additionally corrupted by an additive white Gaussian noise corresponding to 20 dB of input signal-to-noise ratio (SNR).
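The noise model used above is standard. The following sketch shows one way to corrupt complex measurements with AWGN at a prescribed input SNR; the helper name `add_awgn` is ours, not part of any library.

```python
import numpy as np

# Sketch of corrupting simulated measurements with complex AWGN at a
# prescribed input SNR (20 dB in the text). `add_awgn` is a made-up helper.
def add_awgn(y, snr_db, rng):
    noise = rng.standard_normal(y.shape) + 1j * rng.standard_normal(y.shape)
    # Scale the noise so that 20*log10(||y|| / ||noise||) equals snr_db
    noise *= np.linalg.norm(y) / np.linalg.norm(noise) * 10 ** (-snr_db / 20)
    return y + noise

rng = np.random.default_rng(4)
y = rng.standard_normal(360) + 1j * rng.standard_normal(360)  # one transmission
y_noisy = add_awgn(y, 20.0, rng)
measured_snr = 20 * np.log10(np.linalg.norm(y) / np.linalg.norm(y_noisy - y))
assert abs(measured_snr - 20.0) < 1e-9
```

Because the noise is rescaled against the clean signal, the realized input SNR matches the target exactly for every draw.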

We evaluated the proposed model in two distinct scenarios corresponding to weak and strong scattering, where weak scattering is the regime in which the first Born approximation is valid. In particular, we defined the permittivity contrast as f_max ≜ (ϵ_max − ϵ_b)/ϵ_b, where ϵ_max ≜ max_{r∈Ω} ϵ(r). The permittivity contrast quantifies the degree of nonlinearity of the inverse problem, with higher f_max indicating stronger multiple scattering. We set f_max = 10⁻⁶ for the weakly scattering scenario and f_max = 5 × 10⁻² for the strongly scattering one. For each scenario, we trained a separate ScaDec architecture on the corresponding training dataset with the mean squared error (MSE) of the reconstruction as the loss function. To quantitatively measure the quality of a reconstructed image x̂ with respect to the true image x, we used the signal-to-noise ratio (SNR) defined as

$$\mathrm{SNR}(x, \hat{x}) \triangleq \max_{a, b \in \mathbb{R}} \left\{ 10 \log_{10}\!\left( \frac{\|x\|_2^2}{\|x - a\hat{x} + b\|_2^2} \right) \right\}, \tag{10}$$
where higher values of SNR correspond to a better match between the true and reconstructed images. As illustrated in Fig. 4, we observed no issues with the convergence of the training for our architecture and datasets. The training was performed using the popular Adam method for stochastic optimization [49]. Note that all the SNR and visual results were obtained on a distinct dataset that does not contain images used in training.
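The maximization over a and b in (10) is a linear least-squares fit, so the metric can be computed directly; the helper `snr_db` below is our own illustrative implementation of this definition.

```python
import numpy as np

# Sketch of the SNR metric in (10): the scale a and offset b that best align
# x_hat with x are found by least squares before computing the log ratio.
def snr_db(x, x_hat):
    A = np.stack([x_hat, -np.ones_like(x_hat)], axis=1)  # columns: x_hat, -1
    (a, b), *_ = np.linalg.lstsq(A, x, rcond=None)       # fit x ~ a*x_hat - b
    err = x - a * x_hat + b
    return 10 * np.log10(np.sum(x ** 2) / np.sum(err ** 2))

rng = np.random.default_rng(3)
x = rng.standard_normal(100)
# A rescaled, shifted copy of x scores a very high SNR; added noise lowers it
assert snr_db(x, 0.5 * x + 2.0) > 100
assert snr_db(x, 0.5 * x + 2.0 + 0.1 * rng.standard_normal(100)) < 40
```

Fitting a and b makes the metric invariant to global scaling and offset, which is why it rewards structural agreement rather than absolute contrast.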


Fig. 4 Illustration of the convergence of the training on the dataset of piecewise-smooth objects. The left figure shows the training loss and the right figure shows the validation loss. The horizontal lines on the right show the losses of other algorithms on the same data.


Table 1 summarizes the results of comparing ScaDec against baseline optimization-based methods corresponding to three different regularizers: nonnegativity constraints on the image, TV, and the plug-and-play (PnP) BM3D prior from [34]. For the first two regularizers, we consider the effect of the linearity versus nonlinearity of the measurement model. The linear measurement model is obtained by using the first Born approximation, while the nonlinear model accounts for multiple scattering by using the full Lippmann-Schwinger equations [12,14]. For the BM3D prior, we consider only the nonlinear LS model. Fig. 5 additionally shows visual examples of the reconstructed images for each scenario under consideration. Note that the regularization parameters for TV and BM3D were optimized for the best SNR performance in all the experiments.


Table 1. SNR (dB) comparison of six methods on two datasets


Fig. 5 Simulated datasets: visual comparison of the reconstructed images using the linear model based on the first Born approximation regularized with non-negativity [46] (FB-NN, column 2) and total variation [17] (FB-TV, column 4), the nonlinear method of [14] regularized with non-negativity (LS-NN, column 3) and total variation (LS-TV, column 5), and the reconstruction using BM3D as a plug-and-play prior (LS-BM3D, column 6). The values above the images show the SNR (dB) of each reconstruction. The first column shows the true images. Each row corresponds to a different scattering scenario, denoted above the leading true image.


The results confirm that as the level of scattering increases, the performance of the linear formulation based on the first Born approximation degrades, with or without regularization. While TV substantially improves the SNR, it also imposes a piecewise-constant structure, leading to the blocky artifacts visible in Fig. 5. While the BM3D prior yields further SNR improvements, it introduces strongly visible artifacts in the reconstructed image. In contrast, ScaDec produces both high SNR values and more natural-looking images free of blocky artifacts. ScaDec is also stable in terms of performance: the reconstruction SNR is nearly identical in the weakly and strongly scattering regimes.

Fig. 6 illustrates the stability of ScaDec with respect to the input noise level when scattering is strong. The lines in the figure correspond to the different datasets and show that the performance degrades gracefully as the input noise deviates from the level used in training. For example, ScaDec trained at 20 dB of input SNR achieves reconstruction SNRs of 19.88 dB, 20.21 dB, and 20.30 dB at input SNR levels of 19 dB, 20 dB, and 21 dB, respectively, on human faces.


Fig. 6 Illustration of the performance of ScaDec under strong scattering, trained at 20 dB of input SNR but tested at noise levels in the range of 15–25 dB.


Finally, a key benefit of ScaDec is its low computational complexity during the reconstruction stage, where each reconstruction corresponds to a simple forward pass through the ConvNet. In our case, all the optimization-based methods were run on two CPUs (Intel Xeon E5-2620 v4) during testing, while ScaDec was evaluated on a single GPU (NVIDIA GeForce GTX 1080 Ti). The reconstruction time of ScaDec for a single image was less than 2 seconds in all scenarios, while the LS-based methods took over 8 and 35 minutes to reconstruct one image in the weakly and strongly scattering cases, respectively. This dramatic acceleration makes ScaDec particularly attractive for imaging under multiple scattering, where optimization-based approaches are computationally expensive.

4.2. Results on experimental datasets

For the experimental validation, two 2D settings were considered: FoamDielExtTM and FoamDielIntTM, which consist of a fixed foam cylinder and a plastic cylinder located outside or inside the foam, respectively (Fig. 8). In both settings, the objects were placed within an 18 cm × 18 cm square region discretized to a 128 × 128 grid; hence, the pixel size of the reconstructed images was 1.4 mm × 1.4 mm. A total of 8 transmitters were uniformly distributed along a circle of radius 1.78 m, emitting electromagnetic waves towards the objects, and the scattered wave was recorded by 360 receivers. Though the dataset contains measurements at a range of frequencies, we only consider the 5 GHz case; hence, the wavelength of the transmission is λ = 60 mm. The background medium was air with ϵ_b = 1. The permittivity contrasts of the foam and plastic were measured as f_max = 0.45 and f_max = 2, respectively [48].


Fig. 7 Illustration of five randomly-generated images used for training ScaDec to reconstruct from experimental measurements: FoamDielExtTM and FoamDielIntTM.



Fig. 8 Experimental dataset: reconstructed images obtained by ScaDec, LS-TV, and FB-TV from the 2D experimental measurements. The first and second rows correspond to the FoamDielExtTM and FoamDielIntTM settings, respectively. The first column shows the ground truth for each setting. All reconstructed images are 128 × 128 pixels. Note that the colormap for FB-TV differs from the rest because FB severely underestimates the permittivity contrast.


For both settings, we trained the same ScaDec architecture with 6500 pairs of 128 × 128 synthetic images and their simulated scattered measurements. The measurements were generated by solving the multiple scattering problem governed by the Lippmann-Schwinger equations. Each image was synthesized with one larger circle of lower contrast and one smaller circle of higher contrast; the locations and radii of the two circles were randomly generated. Fig. 7 illustrates a random subset of five images generated for training. All measurements were corrupted with additive white Gaussian noise corresponding to 20 dB of input SNR. ScaDec was trained for 1000 epochs to minimize the MSE between the true and reconstructed images.

Visual comparisons of the reconstructed images are shown in Fig. 8, where ScaDec is compared against LS-TV and FB-TV. The result using the BM3D prior was omitted, as it was shown in [34] to be suboptimal with respect to TV on this piecewise-constant dataset. The first column shows the ground truth of the foam cylinder (light blue) and the plastic cylinder (bright yellow) in each setting. The linear model FB-TV dramatically underestimates the permittivity distribution and fails to reconstruct the shape of the objects. The nonlinear model LS-TV produces better reconstructions by taking into account both the multiple scattering and the piecewise-constant nature of the image. Finally, the proposed method obtains the highest-quality reconstruction in terms of both the contrast value and the shapes of the objects. The edges of the foam and plastic are clear and sharp, and no obvious degradation of the contrast value is observed. Visually, the results of ScaDec look very close to the ground truth, which is due to the ability of the framework to adapt to the features in the training dataset. Remarkably, these experimental results also illustrate the potential of training on simulated data and then deploying the trained ConvNet for image formation from experimental data.

5. Conclusion

We designed and experimentally demonstrated the first deep learning architecture for solving the multiple scattering problem in diffraction tomography. The proposed method, called ScaDec, successfully reconstructed high-quality images and outperformed state-of-the-art optimization-based methods in all scenarios. Remarkably, the method trained on simulated data also succeeded in reconstructing images from real experimental data of highly scattering objects. A key advantage of the proposed approach is that the actual process of image formation is substantially accelerated compared to optimization-based reconstruction methods. These features make ScaDec a promising alternative to optimization-based methods and open rich perspectives for efficient correction of scattering in biological samples.

Acknowledgments

We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan Xp GPU for research.

References and links

1. W. Choi, C. Fang-Yen, K. Badizadegan, S. Oh, N. Lue, R. R. Dasari, and M. S. Feld, “Tomographic phase microscopy,” Nat. Methods 4, 717–719 (2007). [CrossRef]   [PubMed]  

2. D. J. Brady, K. Choi, D. L. Marks, R. Horisaki, and S. Lim, “Compressive holography,” Opt. Express 17, 13040–13049 (2009). [CrossRef]   [PubMed]  

3. E. Wolf, “Three-dimensional structure determination of semi-transparent objects from holographic data,” Opt. Commun. 1, 153–156 (1969). [CrossRef]  

4. A. J. Devaney, “Inverse-scattering theory within the Rytov approximation,” Opt. Lett. 6, 374–376 (1981). [CrossRef]   [PubMed]  

5. B. Chen and J. J. Stamnes, “Validity of diffraction tomography based on the first born and the first rytov approximations,” Appl. Opt. 37, 2996–3006 (1998). [CrossRef]  

6. K. Belkebir and A. Sentenac, “High-resolution optical diffraction microscopy,” J. Opt. Soc. Am. A 20, 1223–1229 (2003). [CrossRef]  

7. K. Belkebir, P. C. Chaumet, and A. Sentenac, “Superresolution in total internal reflection tomography,” J. Opt. Soc. Am. A 22, 1889–1897 (2005). [CrossRef]  

8. E. Mudry, P. C. Chaumet, K. Belkebir, and A. Sentenac, “Electromagnetic wave imaging of three-dimensional targets using a hybrid iterative inversion method,” Inv. Probl. 28, 065007 (2012). [CrossRef]  

9. L. Tian and L. Waller, “3D intensity and phase imaging from light field measurements in an LED array microscope,” Optica 2, 104–111 (2015). [CrossRef]  

10. U. S. Kamilov, I. N. Papadopoulos, M. H. Shoreh, A. Goy, C. Vonesch, M. Unser, and D. Psaltis, “Learning approach to optical tomography,” Optica 2, 517–522 (2015). [CrossRef]  

11. T. Zhang, C. Godavarthi, P. C. Chaumet, G. Maire, H. Giovannini, A. Talneau, M. Allain, K. Belkebir, and A. Sentenac, “Far-field diffraction microscopy at λ/10 resolution,” Optica 3, 609–612 (2016). [CrossRef]  

12. E. Soubies, T.-A. Pham, and M. Unser, “Efficient inversion of multiple-scattering model for optical diffraction tomography,” Opt. Express 25, 21786–21800 (2017). [CrossRef]   [PubMed]  

13. H.-Y. Liu, D. Liu, H. Mansour, P. T. Boufounos, L. Waller, and U. S. Kamilov, “SEAGLE: Sparsity-driven image reconstruction under multiple scattering,” IEEE Trans. Comput. Imaging 4, 73–86 (2018). [CrossRef]  

14. Y. Ma, H. Mansour, D. Liu, P. T. Boufounos, and U. S. Kamilov, “Accelerated image reconstruction for nonlinear diffractive imaging,” in Proceedings of the IEEE Int. Conference on Acoustics, Speech and Signal Processing (ICASSP 2018), (Calgary, Canada, 2018).

15. A. Ribés and F. Schmitt, “Linear inverse problems in imaging,” IEEE Signal Process. Mag. 25, 84–99 (2008). [CrossRef]  

16. L. I. Rudin, S. Osher, and E. Fatemi, “Nonlinear total variation based noise removal algorithms,” Phys. D 60, 259–268 (1992). [CrossRef]  

17. Y. Sung and R. R. Dasari, “Deterministic regularization of three-dimensional optical diffraction tomography,” J. Opt. Soc. Am. A 28, 1554–1561 (2011). [CrossRef]  

18. J. W. Lim, K. R. Lee, K. H. Jin, S. Shin, S. E. Lee, Y. K. Park, and J. C. Ye, “Comparative study of iterative reconstruction algorithms for missing cone problems in optical diffraction tomography,” Opt. Express 23, 16933–16948 (2015). [CrossRef]   [PubMed]  

19. U. S. Kamilov, I. N. Papadopoulos, M. H. Shoreh, A. Goy, C. Vonesch, M. Unser, and D. Psaltis, “Optical tomographic image reconstruction based on beam propagation and sparse regularization,” IEEE Trans. Comp. Imag. 2, 59–70, (2016). [CrossRef]  

20. U. S. Kamilov, D. Liu, H. Mansour, and P. T. Boufounos, “A recursive Born approach to nonlinear inverse scattering,” IEEE Signal Process. Lett. 23, 1052–1056 (2016). [CrossRef]  

21. T.-A. Pham, E. Soubies, A. Goy, J. Lim, F. Soulez, D. Psaltis, and M. Unser, “Versatile reconstruction framework for diffraction tomography with intensity measurements and multiple scattering,” Opt. Express 26, 2749–2763 (2018). [CrossRef]   [PubMed]  

22. C. Dong, C. C. Loy, K. He, and X. Tang, “Learning a deep convolutional network for image super-resolution,” in Proceedings of ECCV, (Zurich, Switzerland, 2014), pp. 184–199.

23. U. Schmidt and S. Roth, “Shrinkage fields for effective image restoration,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), (Columbus, OH, USA, 2014), pp. 2774–2781.

24. A. Mousavi, A. B. Patel, and R. G. Baraniuk, “A deep learning approach to structured signal recovery,” in Proceedings of the Allerton Conference on Communication, Control, and Computing, (Allerton Park, IL, USA, 2015), pp. 1336–1343.

25. Y. Chen, W. Yu, and T. Pock, “On learning optimized reaction diffuction processes for effective image restoration,” in Proceedings of te IEEE Conference on Computer Vision and Pattern Recognition (CVPR), (Boston, MA, USA, 2015), pp. 5261–5269.

26. U. S. Kamilov and H. Mansour, “Learning optimal nonlinearities for iterative thresholding algorithms,” IEEE Signal Process. Lett. 23, 747–751 (2016). [CrossRef]  

27. K. H. Jin, M. T. McCann, E. Froustey, and M. Unser, “Deep convolutional neural network for inverse problems in imaging,” IEEE Trans. Image Process. 26, 4509–4522 (2017). [CrossRef]  

28. M. Borgerding and P. Schniter, “Onsanger-corrected deep networks for sparse linear inverse problems,” in Proceedings of the IEEE Global Conference on Signal Processing and Information Processing (GlobalSIP), (Washington, DC, USA, 2016).

29. “Deep residual learning for compressed sensing CT reconstruction via persistent homology analysis,” https://arxiv.org/abs/161106391.

30. A. Sinha, J. Lee, S. Li, and G. Barbastathis, “Lensless computational imaging through deep learning,” Optica 4, 1117–1125 (2017). [CrossRef]  

31. J. C. Ye, Y. Han, and E. Cha, “Deep convolutional framelets: A general deep learning framework for inverse problems,” SIAM J. Imaging Sci. 11, 991–1048 (2018). [CrossRef]  

32. E. Kang, W. Chang, J. Yoo, and J. C. Ye, “Deep convolutional framelet denosing for low-dose ct via wavelet residual network,” IEEE Trans. Med. Imaging, (in press) (2018). [CrossRef]  

33. J. Yoo, S. Sabir, D. Heo, K. H. Kim, A. Wahab, Y. Choi, S.-I. Lee, E. Y. Chae, H. H. Kim, Y. M. Bae, Y.-W. Choi, S. Cho, and J. C. Ye, “Deep learning can reverse photon migration for diffuse optical tomography,” https://arxiv.org/abs/171200912.

34. U. S. Kamilov, H. Mansour, and B. Wohlberg, “A plug-and-play priors approach for solving nonlinear imaging inverse problems,” IEEE Signal. Proc. Let. 24, 1872–1876 (2017). [CrossRef]  

35. M. Born and E. Wolf, Principles of Optics, 7th ed. (Cambridge University Press, 2003), pp. 695–734.

36. E. J. Candès, J. Romberg, and T. Tao, “Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information,” IEEE Trans. Inf. Theory 52, 489–509 (2006). [CrossRef]  

37. D. L. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory 52, 1289–1306 (2006). [CrossRef]  

38. S. V. Venkatakrishnan, C. A. Bouman, and B. Wohlberg, “Plug-and-play priors for model based reconstruction,” in Proceedings of the IEEE Global Conference on Signal Processing and Information Processing (GlobalSIP), (Austin, TX, USA, 2013), pp. 945–948.

39. S. Sreehari, S. V. Venkatakrishnan, B. Wohlberg, G. T. Buzzard, L. F. Drummy, J. P. Simmons, and C. A. Bouman, “Plug-and-play priors for bright field electron tomography and sparse interpolation,” IEEE Trans. Comp. Imag. 2, 408–423 (2016).

40. S. H. Chan, X. Wang, and O. A. Elgendy, “Plug-and-play ADMM for image restoration: Fixed-point convergence and applications,” IEEE Trans. Comp. Imag. 3, 84–98 (2017). [CrossRef]  

41. A. Beck and M. Teboulle, “Fast gradient-based algorithms for constrained total variation image denoising and deblurring problems,” IEEE Trans. Image Process. 18, 2419–2434 (2009). [CrossRef]   [PubMed]  

42. M. V. Afonso, J. M. Bioucas-Dias, and M. A. T. Figueiredo, “Fast image recovery using variable splitting and constrained optimization,” IEEE Trans. Image Process. 19, 2345–2356 (2010). [CrossRef]   [PubMed]  

43. Y. Nesterov, Introductory Lectures on Convex Optimization: A Basic Course (Kluwer Academic Publishers, 2004). [CrossRef]  

44. K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising by sparse 3-D transform-domain collaborative filtering,” IEEE Trans. Image Process. 16, 2080–2095 (2007). [CrossRef]   [PubMed]  

45. O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional networks for biomedical image segmentation,” in Medical Image Computing and Computer-Assisted Intervention (MICCAI), vol. 9351 of LNCS (Springer, 2015), pp. 234–241.

46. Y. Sung, W. Choi, C. Fang-Yen, K. Badizadegan, R. R. Dasari, and M. S. Feld, “Optical diffraction tomography for high resolution live cell imaging,” Opt. Express 17, 266–277 (2009). [CrossRef]   [PubMed]  

47. Z. Liu, P. Luo, X. Wang, and X. Tang, “Deep learning face attributes in the wild,” in Proceedings of the International Conference on Computer Vision (ICCV), (Santiago, 2015).

48. J.-M. Geffrin, P. Sabouroux, and C. Eyraud, “Free space experimental scattering database continuation: experimental set-up and measurement precision,” Inv. Probl. 21, S117–S130 (2005). [CrossRef]  

49. D. Kingma and J. Ba, “Adam: A method for stochastic optimization,” in International Conference on Learning Representations (ICLR), (San Diego, 2015).


Schmitt, F.

A. Ribés and F. Schmitt, “Linear inverse problems in imaging,” IEEE Signal Process. Mag. 25, 84–99 (2008).
[Crossref]

Schniter, P.

M. Borgerding and P. Schniter, “Onsanger-corrected deep networks for sparse linear inverse problems,” in Proceedings of the IEEE Global Conference on Signal Processing and Information Processing (GlobalSIP), (Washington, DC, USA, 2016).

Sentenac, A.

Shin, S.

Shoreh, M. H.

U. S. Kamilov, I. N. Papadopoulos, M. H. Shoreh, A. Goy, C. Vonesch, M. Unser, and D. Psaltis, “Optical tomographic image reconstruction based on beam propagation and sparse regularization,” IEEE Trans. Comp. Imag. 2, 59–70, (2016).
[Crossref]

U. S. Kamilov, I. N. Papadopoulos, M. H. Shoreh, A. Goy, C. Vonesch, M. Unser, and D. Psaltis, “Learning approach to optical tomography,” Optica 2, 517–522 (2015).
[Crossref]

Simmons, J. P.

S. Sreehari, S. V. Venkatakrishnan, B. Wohlberg, G. T. Buzzard, L. F. Drummy, J. P. Simmons, and C. A. Bouman, “Plug-and-play priors for bright field electron tomography and sparse interpolation,” IEEE Trans. Comp. Imag. 2, 408–423 (2016).

Sinha, A.

Soubies, E.

Soulez, F.

Sreehari, S.

S. Sreehari, S. V. Venkatakrishnan, B. Wohlberg, G. T. Buzzard, L. F. Drummy, J. P. Simmons, and C. A. Bouman, “Plug-and-play priors for bright field electron tomography and sparse interpolation,” IEEE Trans. Comp. Imag. 2, 408–423 (2016).

Stamnes, J. J.

Sung, Y.

Talneau, A.

Tang, X.

C. Dong, C. C. Loy, K. He, and X. Tang, “Learning a deep convolutional network for image super-resolution,” in Proceedings of ECCV, (Zurich, Switzerland, 2014), pp. 184–199.

Z. Liu, P. Luo, X. Wang, and X. Tang, “Deep learning face attributes in the wild,” in Proceedings of the International Conference on Computer Vision (ICCV), (Santiago, 2015).

Tao, T.

E. J. Candès, J. Romberg, and T. Tao, “Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information,” IEEE Trans. Inf. Theory 52, 489–509 (2006).
[Crossref]

Teboulle, M.

A. Beck and M. Teboulle, “Fast gradient-based algorithm for constrained total variation image denoising and deblurring problems,” IEEE Trans. Image Process. 18, 2419–2434 (2009).
[Crossref] [PubMed]

Tian, L.

Unser, M.

Venkatakrishnan, S. V.

S. Sreehari, S. V. Venkatakrishnan, B. Wohlberg, G. T. Buzzard, L. F. Drummy, J. P. Simmons, and C. A. Bouman, “Plug-and-play priors for bright field electron tomography and sparse interpolation,” IEEE Trans. Comp. Imag. 2, 408–423 (2016).

S. V. Venkatakrishnan, C. A. Bouman, and B. Wohlberg, “Plug-and-play priors for model based reconstruction,” in Proceedings of the IEEE Global Conference on Signal Processing and Information Processing (GlobalSIP), (Austin, TX, USA, 2013), pp. 945–948.

Vonesch, C.

U. S. Kamilov, I. N. Papadopoulos, M. H. Shoreh, A. Goy, C. Vonesch, M. Unser, and D. Psaltis, “Optical tomographic image reconstruction based on beam propagation and sparse regularization,” IEEE Trans. Comp. Imag. 2, 59–70, (2016).
[Crossref]

U. S. Kamilov, I. N. Papadopoulos, M. H. Shoreh, A. Goy, C. Vonesch, M. Unser, and D. Psaltis, “Learning approach to optical tomography,” Optica 2, 517–522 (2015).
[Crossref]

Waller, L.

H.-Y. Liu, D. Liu, H. Mansour, P. T. Boufounos, L. Waller, and U. S. Kamilov, “SEAGLE: Sparsity-driven image reconstruction under multiple scattering,” IEEE Trans. Comput. Imaging 4, 73–86 (2018).
[Crossref]

L. Tian and L. Waller, “3D intensity and phase imaging from light field measurements in an LED array microscope,” Optica 2, 104–111 (2015).
[Crossref]

Wang, X.

S. H. Chan, X. Wang, and O. A. Elgendy, “Plug-and-play ADMM for image restoration: Fixed-point convergence and applications,” IEEE Trans. Comp. Imag. 3, 84–98 (2017).
[Crossref]

Z. Liu, P. Luo, X. Wang, and X. Tang, “Deep learning face attributes in the wild,” in Proceedings of the International Conference on Computer Vision (ICCV), (Santiago, 2015).

Wohlberg, B.

U. S. Kamilov, H. Mansour, and B. Wohlberg, “A plug-and-play priors approach for solving nonlinear imaging inverse problems,” IEEE Signal. Proc. Let. 24, 1872–1876 (2017).
[Crossref]

S. Sreehari, S. V. Venkatakrishnan, B. Wohlberg, G. T. Buzzard, L. F. Drummy, J. P. Simmons, and C. A. Bouman, “Plug-and-play priors for bright field electron tomography and sparse interpolation,” IEEE Trans. Comp. Imag. 2, 408–423 (2016).

S. V. Venkatakrishnan, C. A. Bouman, and B. Wohlberg, “Plug-and-play priors for model based reconstruction,” in Proceedings of the IEEE Global Conference on Signal Processing and Information Processing (GlobalSIP), (Austin, TX, USA, 2013), pp. 945–948.

Wolf, E.

E. Wolf, “Three-dimensional structure determination of semi-transparent objects from holographic data,” Opt. Commun. 1, 153–156 (1969).
[Crossref]

M. Born and E. Wolf, Principles of Optics, 7th ed. (Cambridge University Press, 2003), pp. 695–734.

Ye, J. C.

J. C. Ye, Y. Han, and E. Cha, “Deep convolutional framelets: A general deep learning framework for inverse problems,” SIAM J. Imaging Sci. 11, 991–1048 (2018).
[Crossref]

J. W. Lim, K. R. Lee, K. H. Jin, S. Shin, S. E. Lee, Y. K. Park, and J. C. Ye, “Comparative study of iterative reconstruction algorithms for missing cone problems in optical diffraction tomography,” Opt. Express 23, 16933–16948 (2015).
[Crossref] [PubMed]

E. Kang, W. Chang, J. Yoo, and J. C. Ye, “Deep convolutional framelet denosing for low-dose ct via wavelet residual network,” IEEE Trans. Med. Imaging, (in press) (2018).
[Crossref]

Yoo, J.

E. Kang, W. Chang, J. Yoo, and J. C. Ye, “Deep convolutional framelet denosing for low-dose ct via wavelet residual network,” IEEE Trans. Med. Imaging, (in press) (2018).
[Crossref]

Yu, W.

Y. Chen, W. Yu, and T. Pock, “On learning optimized reaction diffuction processes for effective image restoration,” in Proceedings of te IEEE Conference on Computer Vision and Pattern Recognition (CVPR), (Boston, MA, USA, 2015), pp. 5261–5269.

Zhang, T.

M. V. Afonso, J. M. Bioucas-Dias, and M. A. T. Figueiredo, “Fast image recovery using variable splitting and constrained optimization,” IEEE Trans. Image Process. 19, 2345–2356 (2010).

K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising by sparse 3-D transform-domain collaborative filtering,” IEEE Trans. Image Process. 16, 2080–2095 (2007).

D. L. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory 52, 1289–1306 (2006).

J. Yoo, S. Sabir, D. Heo, K. H. Kim, A. Wahab, Y. Choi, S.-I. Lee, E. Y. Chae, H. H. Kim, Y. M. Bae, Y.-W. Choi, S. Cho, and J. C. Ye, “Deep learning can reverse photon migration for diffuse optical tomography,” arXiv:1712.00912.

“Deep residual learning for compressed sensing CT reconstruction via persistent homology analysis,” arXiv:1611.06391.

D. Kingma and J. Ba, “Adam: A method for stochastic optimization,” in International Conference on Learning Representations (ICLR) (San Diego, 2015).


Figures (8)

Fig. 1. An object with scattering potential f(r) is illuminated with an input wave uin, which interacts with the object and leads to the scattered field usc at the sensors.

Fig. 2. Overview of the proposed approach: the measured data are first backpropagated into a complex-valued image, which is then mapped into the final image by a ConvNet.

Fig. 3. Visual illustration of the proposed learning architecture based on U-Net [45]. The input consists of two channels for the real and imaginary parts of the backpropagated vector w ∈ ℂN. The output is a single image of the scattering potential x ∈ ℝN.

Fig. 4. Convergence of the training on the dataset of piecewise-smooth objects. The left plot shows the training loss and the right plot shows the validation loss. The horizontal lines on the right indicate the losses of the other algorithms on the same data.

Fig. 5. Simulated datasets: Visual comparison of the reconstructed images using the linear model with the first Born approximation regularized by imposing non-negativity [46] (FB-NN, column 2) and total variation [17] (FB-TV, column 4), the nonlinear method in [14] regularized by imposing non-negativity (LS-NN, column 3) and total variation (LS-TV, column 5), and the reconstruction using BM3D as a plug-and-play prior (LS-BM3D, column 6). The values above the images show the SNR (dB) of each reconstruction. The first column shows the true images. Each row corresponds to a different scattering scenario, indicated above the corresponding true image.

Fig. 6. Performance of ScaDec under strong scattering when trained at 20 dB noise SNR but tested at noise levels in the 15–25 dB SNR range.

Fig. 7. Five randomly generated images used for training ScaDec to reconstruct from the experimental measurements FoamDielExtTM and FoamDielIntTM.

Fig. 8. Experimental dataset: Reconstructed images obtained by ScaDec, LS-TV, and FB-TV from 2D experimental measurements. The first and second rows correspond to the FoamDielExtTM and FoamDielIntTM settings, respectively. The first column shows the ground truth for each setting. All reconstructed images are 128 × 128 pixels. Note that the colormap for FB-TV differs from the rest because FB severely underestimates the permittivity contrast.

Tables (1)

Table 1. SNR (dB) comparison of six methods on two datasets

Equations (11)

(1) $u(\mathbf{r}) = u_{\mathrm{in}}(\mathbf{r}) + \int_{\Omega} g(\mathbf{r}-\mathbf{r}')\, f(\mathbf{r}')\, u(\mathbf{r}')\, \mathrm{d}\mathbf{r}', \quad (\mathbf{r} \in \Omega)$

(2) $g(\mathbf{r}) \triangleq \frac{j}{4} H_0^{(1)}\!\left(k_b \|\mathbf{r}\|_2\right)$

(3) $u_{\mathrm{sc}}(\mathbf{r}) = \int_{\Omega} g(\mathbf{r}-\mathbf{r}')\, f(\mathbf{r}')\, u(\mathbf{r}')\, \mathrm{d}\mathbf{r}', \quad (\mathbf{r} \in \Gamma)$

(4) $\mathbf{u} = \mathbf{u}_{\mathrm{in}} + \mathbf{G}(\mathbf{u} \odot \mathbf{x})$

(5) $\mathbf{y} = \mathbf{S}(\mathbf{u} \odot \mathbf{x}) + \mathbf{e}$

(6) $\mathbf{H}(\mathbf{x}) \triangleq \mathbf{S}(\mathbf{u}(\mathbf{x}) \odot \mathbf{x}), \quad \text{where} \quad \mathbf{u}(\mathbf{x}) \triangleq \arg\min_{\mathbf{u} \in \mathbb{C}^N} \left\{ \tfrac{1}{2} \|\mathbf{u} - \mathbf{G}(\mathbf{u} \odot \mathbf{x}) - \mathbf{u}_{\mathrm{in}}\|_2^2 \right\}$

(7) $\mathbf{y} = \mathbf{H}(\mathbf{x}) + \mathbf{e}$

(8) $\widehat{\mathbf{x}} = \arg\min_{\mathbf{x} \in \mathbb{R}^N} \left\{ \tfrac{1}{2} \|\mathbf{y} - \mathbf{H}(\mathbf{x})\|_2^2 + \mathcal{R}(\mathbf{x}) \right\}$

(9) $\mathbf{z}_k = \mathbf{P}_k \mathbf{y}_k, \quad \text{with} \quad \mathbf{P}_k \triangleq \mathrm{diag}(\mathbf{u}_{\mathrm{in},k}^{*})\, \mathbf{S}^{\mathsf{H}}$

(10) $\mathbf{w} = \sum_{k=1}^{K} \mathbf{z}_k = \sum_{k=1}^{K} \mathbf{P}_k \mathbf{y}_k$

(11) $\mathrm{SNR}(\mathbf{x}, \widehat{\mathbf{x}}) \triangleq \max_{a, b \in \mathbb{R}} \left\{ 10 \log_{10} \left( \frac{\|\mathbf{x}\|_2^2}{\|\mathbf{x} - a\widehat{\mathbf{x}} + b\|_2^2} \right) \right\}$
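The discretized field equation u = u_in + G(u ⊙ x) defines the total field implicitly; for weakly scattering objects it can be solved by simple fixed-point (Born-series-style) iteration. The following is a minimal NumPy sketch of that idea, not the paper's solver; the small random stand-in for the discrete Green's matrix G and the toy incident field are my own illustrative assumptions.

```python
import numpy as np

def solve_total_field(u_in, G, x, n_iter=200):
    """Fixed-point iteration u <- u_in + G(u * x) for the discretized
    field equation. Converges when the map u -> G(u * x) is a
    contraction, i.e., when scattering is sufficiently weak."""
    u = u_in.copy()
    for _ in range(n_iter):
        u = u_in + G @ (u * x)
    return u

# Toy example with a small random "Green's matrix" (illustrative only).
rng = np.random.default_rng(0)
N = 16
G = 0.05 * (rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N)))
x = rng.uniform(0.0, 1.0, size=N)                   # scattering potential
u_in = np.exp(1j * np.linspace(0.0, 2 * np.pi, N))  # incident field
u = solve_total_field(u_in, G, x)
residual = np.linalg.norm(u - u_in - G @ (u * x))   # should be ~0
```

For strong scattering the iteration diverges, which is precisely why the paper relies on more robust nonlinear forward solvers.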
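The SNR metric of the last equation maximizes over an amplitude a and offset b; since b ranges over all of ℝ, its sign is immaterial, and the inner maximization reduces to an ordinary least-squares fit of a·x̂ + b to x. A sketch under that reading (the function name is mine):

```python
import numpy as np

def snr_db(x, x_hat):
    """Affine-fit SNR: fit scale a and offset b by least squares,
    then report 10*log10(||x||^2 / ||x - (a*x_hat + b)||^2)."""
    x, x_hat = np.ravel(x), np.ravel(x_hat)
    A = np.stack([x_hat, np.ones_like(x_hat)], axis=1)
    (a, b), *_ = np.linalg.lstsq(A, x, rcond=None)
    err = x - (a * x_hat + b)
    return 10.0 * np.log10(np.sum(x**2) / np.sum(err**2))
```

Because of the fit, the metric is invariant to any affine rescaling of the reconstruction, which makes comparisons across methods with different intensity calibrations fair.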